From checker at panix.com Fri Jul 1 01:32:18 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:32:18 -0400 (EDT)
Subject: [Paleopsych] CHE: Exploring the Good That Comes From Shame
Message-ID:

Exploring the Good That Comes From Shame
The Chronicle of Higher Education, 5.7.1
http://chronicle.com/weekly/v51/i43/43a01102.htm

By PETER MONAGHAN

Elspeth Probyn, professor of gender and cultural studies, University of Sydney, Australia

The cultural emphasis on self-esteem and pride veils the benefits of shame, Ms. Probyn argues in Blush: Faces of Shame (University of Minnesota Press). Shame, a universal feeling, alerts us to examine what we are and would like to be, she says. When there is "public silence around shame, it doesn't get discussed, it just gets more deeply embedded."

Q. Why the title "Blush"?

A. All emotions are embodied, but shame feels the most embodied. Of course blushes show more clearly on freckled Celts like myself, but we all feel that heat on our face. There are some deep similarities that humans have, and we have perhaps overly fixated on the differences during the last 20 years.

Q. What good comes from shame?

A. A kind of painful good. It would be silly to say that shame doesn't hurt and isn't sometimes very painful, but it does make you think about what you hold dear, whether that be at an individual or collective level, or as a nation. It is one of the emotions that most clearly throw into relief the values we have.

Q. Is shame, which may prompt self-improvement, more trustworthy than pride, which demands but does not always deserve respect?

A. Yes. Part of my interest in shame came from thinking about the limits of pride, especially when it's used in queer pride, or fat pride, or whatever. There is a real limit to those politics. Shame could be used to highlight what we ought to be proud of but haven't quite achieved.

Q. If shame is worthwhile, how about shaming?

A. Shaming is very limited in its value.
It requires that someone stand on high and point the finger. Some strands of feminism have used shaming, but it's the experiencing of shame rather than the wielding of shame that can be good.

Q. How can shame inform ethics and politics?

A. Well, if, for instance, after Abu Ghraib, we'd all just paused to say, "Oh, my God." But you have to have a political culture built up that's capable of those moments of pausing and reflecting. The most courageous governments would be capable of that -- the ones that are most deeply rooted in a democratic sense.

Q. How do people who have been instilled with a harmful sense of shame early in life discern negative shame from positive, elucidating shame?

A. That might be helped by distinguishing clearly between shame and guilt. In the internal, intrapsychic, intrasubjective sense, guilt can become just lodged there, whereas shame is more mercurial and doesn't seem to lodge anywhere. It returns and forces us to think again about our actions.

From checker at panix.com Fri Jul 1 01:32:31 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:32:31 -0400 (EDT)
Subject: [Paleopsych] NS: Genes blamed for fickle female orgasm
Message-ID:

Genes blamed for fickle female orgasm
http://www.newscientist.com/article.ns?id=mg18625033.900&print=true
* 11 June 2005
* Rowan Hooper

IS THIS the ultimate excuse for poor performance in bed? "Sorry, darling," the man says, just before falling asleep. "It's your genes."

According to a study published this week, up to 45 per cent of the differences between women in their ability to reach orgasm can be explained by their genes. Despite decades of surveys and conjecture about the role of culture, upbringing and biology in female sexual function, from Freud in 1905 to the Hite report in 1976, this is the first study of the role of a woman's genes. Its findings suggest there is an underlying biological basis to a woman's ability to achieve orgasm.
Whether that basis is anatomical, physiological or psychological remains uncertain, says Tim Spector of the twin research unit at St Thomas' Hospital in London, who carried out the study. "But it is saying that it is not purely cultural, or due to peer pressure, or to differences in upbringing or religion," he says. "There are wide differences between women and a lot of these differences are due to genes."

Spector's team asked more than 6000 female twins to fill out a confidential questionnaire about how often they achieved orgasm during intercourse and masturbation. They received 4037 complete replies, which included answers from 683 pairs of non-identical twins and 714 pairs of identical twins. The women's ages ranged from 19 to 83, and about 3 per cent were lesbian or bisexual.

Only 14 per cent of the women reported always experiencing orgasm during intercourse. Another 32 per cent of the women reported that they were unable to achieve orgasm more than a quarter of the time, while 16 per cent never achieved it at all. Comparing the results from identical and non-identical twins suggests that 34 per cent of this variation in ability to orgasm during intercourse is genetic.

The idea behind twin studies is that pairs of twins grow up in similar environments. So if identical twins are more similar in some way than non-identical twins, then that similarity must be down to their identical genes rather than the environment.

Unsurprisingly, more women were able to achieve orgasm through masturbation, with 34 per cent saying they could always do so. However, the figure for those who could never achieve it was only slightly lower, at 14 per cent. The analysis suggests that 45 per cent of this variation is genetic (Biology Letters, DOI: 10.1098/rsbl.2005.0308).

Spector says he was surprised by the similarity in the numbers of women unable to experience orgasm either through intercourse or masturbation. "With masturbation there are fewer external factors - i.e.
men," he says. "So the higher heritability value for masturbation gives us a clearer picture of what's going on."

The discovery of a genetic basis for the ability of women to orgasm raises questions about its evolution. One theory is that it is a tool for mate selection, the idea being that males best able to bring females to orgasm are also the best males to help raise children. Another is that the female orgasm produces movements that increase sperm uptake, and therefore fertility.

But studies of other primates suggest otherwise. Female stump-tailed macaques have orgasms too - but mainly during female-female mountings, which hardly supports the fertility or mate-selection idea. Bonobos engage in highly promiscuous sex and mutual masturbation, complete with orgasms, a practice that is thought to promote group cohesion. This supports yet another theory: that orgasm is important in bonding. But even if orgasm does play this role, it cannot be crucial in humans.

The finding that many women cannot achieve orgasm because they do not have the genes for it shows that the ability to orgasm is not a trait for which there has been strong evolutionary selection, says Elisabeth Lloyd of Indiana University in Bloomington, author of The Case of the Female Orgasm. This supports her theory that as far as orgasms are concerned, women have been riding on the genetic coat-tails of male evolution, and that the female orgasm is merely an accidental echo of the male one, the equivalent of male nipples.

Lloyd says the findings also challenge the notion that the failure to achieve orgasm represents "female sexual dysfunction", an idea popular with companies keen to sell remedies for this so-called disorder. "What definition of 'normal' could possibly justify labelling a third of women as 'abnormal'?" she asks.

Even if struggling to achieve orgasm is nothing unusual, Spector says it might be possible to find ways to make it easier.
Though hundreds of genes could be involved, "that doesn't mean we couldn't find the genes and pathways, if this was taken more seriously as a problem", he says.

From checker at panix.com Fri Jul 1 01:32:38 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:32:38 -0400 (EDT)
Subject: [Paleopsych] NS: The Case of the Female Orgasm: Bias in the science of evolution by Elisabeth A Lloyd
Message-ID:

The Case of the Female Orgasm: Bias in the science of evolution by Elisabeth A Lloyd
http://www.newscientist.com/article.ns?id=mg18624991.800&print=true
* 14 May 2005
* Gail Vines

SEXUAL climax for the male is, evolutionarily speaking, rather dull. Its raison d'être seems crystal clear. Orgasm fosters men's reproductive success, because it is linked to the ejaculation of sperm. Only devotees of tantric yoga, apparently, can achieve orgasm without ejaculation.

But women too are able to experience orgasm. Sexologists have documented the "clonic contraction of pelvic and abdominal muscles initiated by a spinal reflex", and, in Elisabeth Lloyd's favourite definition, the "combination of waves of a very pleasurable sensation and mounting of tensions, culminating in a fantastic sensation and release of tension". What has puzzled generations of thinkers, however, is why women, as well as men, should have evolved the capacity for such sexual pleasure.

Over the past century, scores of biologists have sought an answer in natural selection. In The Case of the Female Orgasm, Lloyd, who is a professor of biology at Indiana University, has totted up 21 alternative explanations. All these "adaptationist" theories share one thing: the belief that women have evolved the capacity for orgasm because it fosters their reproductive success. In one of the most popular accounts, female orgasm is the cement in pair bonds: mutual pleasure fosters happy monogamous couples who share childcare ever after. The latest idea, "sperm competition", is more hydraulic in tone.
It argues that orgasmic contractions of the uterus are designed to suck up sperm from the vagina, fostering the reproductive success of the male who gives pleasure to his partner.

There is one big problem with all these ideas: no study has ever established a reliable link between a woman's orgasmic capabilities and her fertility or fecundity. And that, says Lloyd, should immediately set warning lights flashing. And there's another problem: the glib assumption that female orgasm is designed to happen during heterosexual intercourse. In fact, the data shows that many women struggle to climax during conventional penetrative sex, and usually do so only with direct clitoral stimulation. Yet during masturbation both women and men can achieve orgasm in about four minutes.

The conclusion, Lloyd argues, must surely be that the female orgasm has no biological function. Rather, it's on a par with the male nipple - an accident of shared developmental pathways in the early embryo. Because women need nipples to suckle their babies, men end up with rudimentary versions too. They may not give milk, but like the female's they have erotic sensibilities. As for genitalia, because men need ejaculatory penises, women end up with clitorises capable of similar sexual pleasures.

Lloyd reckons that biases in evolutionary thinking have blinkered generations of mostly male biologists. It is time to give up the adaptationist's fallacy and face facts. The late Stephen Jay Gould, who encouraged Lloyd's long-standing investigation, must be cheering from above.
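The heritability figures quoted in the Spector twin study above (34 per cent for intercourse, 45 per cent for masturbation) come from comparing how strongly the trait correlates within identical (MZ) twin pairs versus non-identical (DZ) pairs. A minimal sketch of the classical calculation, using Falconer's formula; the correlation values below are invented for illustration and chosen only so the result echoes the article's 34 per cent figure - they are not data from the study:

```python
# Classical twin-design heritability estimate (Falconer's formula):
#   h^2 = 2 * (r_MZ - r_DZ)
# MZ twins share ~100% of their genes, DZ twins ~50% on average, so the
# excess similarity of MZ pairs over DZ pairs is attributed to genes.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate broad-sense heritability from MZ and DZ twin correlations."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical within-pair correlations for a trait:
r_mz = 0.45  # correlation within identical twin pairs
r_dz = 0.28  # correlation within non-identical twin pairs

h2 = falconer_heritability(r_mz, r_dz)
print(f"Estimated heritability: {h2:.2f}")  # 0.34, i.e. 34% of variation
```

Real twin analyses (including Spector's) use more elaborate variance-components models, but the logic - MZ/DZ similarity contrast - is the same.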
From checker at panix.com Fri Jul 1 01:33:08 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:33:08 -0400 (EDT)
Subject: [Paleopsych] Cordis: Technology could grow beyond human control, warns Millennium report
Message-ID:

Technology could grow beyond human control, warns Millennium report
http://dbs.cordis.lu/cgi-bin/srchidadb?CALLER=NHP_EN_NEWS&ACTION=D&SESSION=&RCN=EN_RCN_ID:24053

[Only the 2004 report, which costs $50, is on the site.]

[Date: 2005-06-28]

Many people still do not appreciate how fast science and technology (S&T) will change over the next 25 years, and given this rapid development along several different fronts, the possibility of technology growing beyond human control must now be taken seriously, according to a new report.

The State of the Future 2005 report is produced by the United Nations University's Millennium Project - a global think tank of foresight experts, academics and policy makers. It analyses current global trends and examines in detail some of the current and future challenges facing the world.

Setting the scene, the report states: 'Future synergies among nanotechnology, biotechnology, information technology and cognitive science can dramatically improve the human condition by increasing the availability of food, energy and water and by connecting people and information anywhere. The effect will be to increase collective intelligence and create value and efficiency while lowering costs.'

However, it warns that 'a previous and troubling finding from the Millennium Project still remains unsolved: although it is increasingly clear that humanity has the resources to address its global challenges, unfortunately it is not increasingly clear how much wisdom, goodwill and intelligence will be focussed on these challenges.'
The report argues that because the factors that caused the acceleration of S&T are themselves accelerating, the rate of change in the past 25 years will appear slow compared to the rate of change in the next 25 years. 'To help the world cope with the acceleration of change, it may be necessary to create an international S&T organisation to arrange the world's science and technology knowledge as well as forecasts of potential consequences in a better Internet-human interface,' it argues.

Taking one particular example - that of nanotechnology - the report predicts that this field will deliver extraordinary benefits for humanity, but warns that little is currently known about the environmental and health risks of nanomaterials. Since the military is currently a major player in the development of nanotechnology, the report proposes military research to help understand and manage these risks.

The most important questions to pursue, according to the report, are: how are nanoparticles absorbed into the body through the skin, lungs, eyes, ears and alimentary canal? Once in the body, can nanoparticles evade natural defences of humans and other animals? What are the potential exposure routes of nanomaterials - both airborne and waterborne? How biodegradable are nanotube-based structures?

The authors suggest that a classification system will be needed to provide a framework within which to make research judgements and keep track of the knowledge regarding potential nanotech pollution. 'Toxicologists and pharmaceutical scientists will have to be brought together to investigate nanoparticles' ability to evade cell defences to target disease,' they add.

Returning to the wider challenges facing humanity, the report notes that national decision makers are rarely trained in the theory and practice of decision making, and argues that advanced decision support software could help.
'Formalized ethics and decision training for decision makers could result in a significant improvement in the quality of global decisions,' it concludes.

For further information, please consult the following web address:
http://www.acunu.org/millennium/

Category: Publication
Data Source Provider: American Council for the United Nations University
Document Reference: Based on the State of the Future 2005 report
Subject Index: Scientific Research; Social Aspects; Forecasting; Materials Technology

From checker at panix.com Fri Jul 1 01:33:15 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:33:15 -0400 (EDT)
Subject: [Paleopsych] NS: Interview: The Koran to quantum physics
Message-ID:

Interview: The Koran to quantum physics
http://www.newscientist.com/article.ns?id=mg18625051.600&print=true

[I don't think Mr. Mencken wrote enough about Islam to get into its fit with science. He was certainly opposed to all attempts to fit Christianity with science.]

* 25 June 2005
* Michael Bond

Iran is changing. A society once closed to the outside world has acquired a hunger for knowledge and a thirst for cutting-edge ideas. The number of publications by Iranian scientists in international journals has quadrupled over the past decade. Young people in particular want more Kuhn and less Khomeini. And they voted overwhelmingly against hardline candidates in last week's elections. But what about the clerics who have led Iran since 1979? How comfortable are they with modern science and technology? Do they oppose it? Can they learn to live with it? Do they believe it should be "Islamicised"?

Western ways of thinking and doing have long held a fascination for Iran's religious leaders, from before the Islamic revolution of 1979 that deposed the Shah. When the Shah banned Ayatollah Khomeini's speeches, for example, his supporters distributed them on audio cassettes in the hundreds of thousands.
Similarly, desktop publishing was eagerly adopted to produce glossy magazines extolling the virtues of post-revolution Iran. Unlike most other Muslim countries, Iran has several institutions dedicated to enabling clerics to test their knowledge and ideas against those developed in modern universities. Mofid University is the best known. In just 10 years, it has developed a reputation in the fields of philosophy and human rights, and organises exchanges with universities abroad, including the US. Michael Bond travelled to Qom - Iran's spiritual hub, birthplace of the Islamic revolution and home of Mofid University - to ask Masoud Adib, Mofid's head of philosophy, about Islam and the challenge of science.

Is there such a thing as Islamic science?

We cannot really talk of Islamic science. We can talk of Islamic philosophy, political science, sociology, and maybe Islamic psychology, but not Islamic physics or chemistry. Sciences like physics and chemistry are neutral. However, in science it is important to distinguish between discovery and judgement - between collecting data or experimentation, and evaluating and judging what has been collected. When researchers evaluate data, they all use the same methodology, whether they are Muslim or Christian, religious or secular. But when a researcher is collecting data or conducting experiments, things like religion, culture or even the attitude of the researcher make a difference. There might be differences between the way a Muslim collects data and someone else, just as there are differences in the way women and men collect data, or people from different cultures. But this does not mean the science produced is Islamic science.

How does an Islamic approach to experimenting and data collection differ from other approaches?

In an Islamic culture, the reason a person seeks knowledge is to know God, to seek a better understanding of God. That is the motivation.
Someone from another culture or religion may do it for another reason: to seek particular technologies, for example, or just to know reality. People who do science for different reasons will probably look at different areas, or approach a problem from different sides.

What would a Muslim scientist seek?

In Islam, science or knowledge should not be sought solely for the sake of curiosity. Research should always be targeted. In a world where there is a lot of disease and many complex problems such as poverty, famine, drought and lack of education, scientists should not be allowed to just go after scientific curiosity for its own sake. It is the duty of scientists to try to solve these problems. So a scientist should not spend all his time in a laboratory working for himself, satisfying his own curiosity. He needs to always consider whether what he is doing is in line with what God wants.

If people are guided so much by religion, how can they do good, objective science?

Religion offers a framework for life. It helps you from the moment you get up in the morning until you go to sleep at night. You have to live within it. But that doesn't mean that in every moment of the day you have to take your instructions from religion. Rather, it means that you have to live for religious targets, and that the values from your religion should govern everything you do. In this there is no difference between Islam and other religions. Within that religious framework, you have to learn how to secularise. Day-to-day life has to be based on secular knowledge: for instance, how to eat your breakfast or work in an office. So you can have knowledge of a secular science within the framework of a religious life.

How does that work in practice? Where do you draw the line between the two?

People tend to make two mistakes.
One is to try to derive the details of life from religion - for example, looking to religion for the answers to why everything happens, the answers to all the practical things in life. This will not help us run a society. The other mistake is to loosen the religious framework so much that you think you can derive the ultimate aim of life from the empirical. One of the major reasons a lot of Muslims do not do well in science is that they make the first mistake. A lot of modern societies run into difficulties and cannot adapt to problems in life because of the second mistake.

How can Iran modernise and develop in science and technology without sacrificing the values and traditions of Islam?

Iran has already modernised in some areas. One of the problems of this modernising is that it is not a result of blossoming from the inside; it has come from the outside. Over the past 150 years, a gap has opened up in Iranian society, with one group going for modernity and another for tradition. The revolution in Iran had roots in both modernity and tradition, and I believe the gap between the two has been gradually closing. However, if modernity is not based on a nation's culture it can do serious damage. This is what happened in the west. This does not mean we have to escape from modernity. Rather we have to try to minimise the damage that arises from the clash with tradition. This means that in our individual lives, and as a society, we have to keep our eyes on religious targets while at the same time making best use of modern knowledge.

How would that work in science?

One of the duties of a scientist from any culture is to progress in science and knowledge while preserving his moral values, not as a religious person but as a human being. You have to produce knowledge, but you may have to restrict yourself from certain areas. It is delicate - you lose if you hold back from research, you lose if you ignore your moral values.
Are there any modern technologies that you think are a particular threat to Islam?

On one level, technology is simply a tool and people have to learn how to use it. Of course it depends how you use it. The important thing is that people use technology in the best way for a country and minimise the damage. But there is another deeper level at which to look at technology. New technologies are deeply tied up with spirituality and morality, for they influence how we behave. For example, whether I choose to go to work on horseback or in a car will affect how I behave during the journey and the effect I have on others. Whenever a new technology arises, such as the internet, it is essential to have a dialogue about how it is going to affect us. We need time to contemplate such changes in life.

From checker at panix.com Fri Jul 1 01:33:38 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:33:38 -0400 (EDT)
Subject: [Paleopsych] NS: Did humans evolve in fits and starts?
Message-ID:

Did humans evolve in fits and starts?
http://www.newscientist.com/article.ns?id=dn7539&print=true
* 17:30 17 June 2005
* Gaia Vince

Humans may have evolved during a few rapid bursts of genetic change, according to a new study of the human genome, which challenges the popular theory that evolution is a gradual process. Researchers studying human chromosome 2 have discovered that the bulk of its DNA changes occurred in a relatively short period of time and, since then, only minor alterations have occurred. This backs a theory called punctuated equilibrium which suggests that evolution actually occurred as a series of jumps with long static periods between them.

Evolutionary stages are marked by changes to the DNA sequences on chromosomes. One of the ways in which chromosomes are altered is through the duplications of sections of the chromosomes. These DNA fragments may be duplicated and inserted back into the chromosome, resulting in two copies of the section.
Evan Eichler, associate professor of genomic sciences at the University of Washington in Seattle, US, and colleagues looked at duplicated DNA sequences on a specific section of chromosome 2, to compare them with ape genomes and Old World monkey genomes. They expected to find that duplications had occurred gradually over the last few million years. Instead, they found that the big duplications had occurred in a short period of time, relatively speaking, after which only smaller rearrangements occurred. Eichler found the bulk of the duplications were present in the genomes of humans, chimpanzees, gorillas and orang-utans, but were absent in Old World monkeys - such as baboons and macaques.

Narrow window

An analysis of the degree of chromosomal decay for this section showed that the major duplications occurred in the narrow window of evolutionary time between 20 million and 10 million years ago, after human ancestors had split from Old World monkeys, but before the divergence of humans and great apes.

"It is unclear why [these duplication] events occurred so frequently during this period of human and great ape evolutionary history. It is also unclear as to why they suddenly cease, at least in this region of chromosome 2," Eichler says. "Other regions may show different temporal biases."

"The important implication here is that episodic bursts of activity challenge the concept of gradual clock-like changes during the course of genome evolution," he says. "Since duplications are important in the birth of new genes and large-scale chromosomal rearrangements, it may follow that these processes may have gone through similar episodes of activity followed by quiescence."

Growing evidence

Laurence Hurst, professor of evolutionary biology at the University of Bath in the UK, says the study was "very interesting", although he would like to see this punctuated evolution demonstrated for other chromosomes "to be more confident that this is a general pattern".
"There is growing evidence that evolutionary processes may occur in bursts. We now know, for example, that 50 million years ago there was a burst of activity that resulted in lots of new genes being produced," he told New Scientist.

It is unknown what effect the sudden duplication activity may have had on chromosome 2. Eichler theorises that it may have resulted in genes for increased brain size or pathogen evasion. "If specific regions of chromosomes can have very punctuated events, it means our models based on gradual evolution are probably wrong," he says. The group will continue looking at the chromosome duplications to try to correlate them with changes in gene function or expression.

Journal reference: Genome Research (vol 15, p 914)

Related Articles
* Hominid inbreeding left humans vulnerable to disease
  http://www.newscientist.com/article.ns?id=dn6920
  25 January 2005
* Jumping genes help choreograph development
  http://www.newscientist.com/article.ns?id=mg18424702.700
  23 October 2004
* Wonderful spam
  http://www.newscientist.com/article.ns?id=mg18224496.100
  29 May 2004

Weblinks
* Eichler Laboratory, University of Washington, Seattle, US
  http://eichlerlab.gs.washington.edu/
* Laurence Hurst, Bath University, UK
  http://www.bath.ac.uk/bio-sci/hurst.htm
* Genome Research
  http://www.genome.org/

E-mail me if you have problems getting the referenced articles.
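The article's "between 20 million and 10 million years ago" window is inferred from sequence decay: duplicate copies accumulate substitutions at a roughly steady rate, so the divergence between two copies can be converted into an approximate age. A minimal sketch of that molecular-clock arithmetic; the substitution rate used here is a commonly quoted ballpark figure (about 1e-9 substitutions per site per year), not a number from the study:

```python
# Molecular-clock dating of a duplication: if two duplicate copies differ
# at a fraction `divergence` of aligned sites, and each copy accumulates
# substitutions independently at `rate` per site per year, then the
# duplication happened roughly divergence / (2 * rate) years ago.

def duplication_age_years(divergence: float, rate: float = 1e-9) -> float:
    """Approximate age of a duplication event, in years.

    The factor of two reflects that both copies diverge from the
    common ancestral sequence simultaneously.
    """
    return divergence / (2.0 * rate)

# A pair of duplicate copies differing at 3% of aligned sites:
age = duplication_age_years(0.03)
print(f"Approximate age: {age / 1e6:.0f} million years")  # ~15 million years
```

Real analyses calibrate the rate against dated species splits and correct for multiple hits at the same site, but this is the core back-of-the-envelope calculation.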
From checker at panix.com Fri Jul 1 01:45:53 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 30 Jun 2005 21:45:53 -0400 (EDT)
Subject: [Paleopsych] WSJ: Imprinted Genes Offer Key to Some Diseases -- And to Possible Cures
Message-ID:

Imprinted Genes Offer Key to Some Diseases -- And to Possible Cures
http://online.wsj.com/article/0,,SB111956911033268215,00.html
By SHARON BEGLEY
June 24, 2005; Page B1

According to the old joke, the homely but brilliant male scientist married the gorgeous but dim model figuring their children would have her looks and his brains. He was crushed when they had her brains and his looks.

The scientist was clearly not among those studying a booming new area of genetics. If he had been, he would have known that whether a child's traits are shaped by mom's genes or dad's genes isn't a simple matter of recessiveness or dominance, let alone of pure luck, as the textbook wisdom says. Instead, some genes come with molecular tags saying (in biochemical-ese), "I come from mom; ignore me," or "You got me from dad; pretend I'm not here."

Such genes are called imprinted. Unlike recessive or dominant genes (such as for black or blond hair), which are composed of different molecules, these genes are identical except for the silencer tag sitting atop them. The result is that if the active gene is defective, there is no working backup; a healthy but silenced gene from the other parent can't step into the breach. In the joke, mom's beauty genes and dad's brainy genes were silenced, leaving mom's dimwitted genes and dad's homely ones to call the shots.

No one has reliably identified genes for beauty or for brains, let alone figured out whether mom's or dad's count (or whether this explains male-pattern baldness). But real imprinted genes are hitting the big time.
Imprinting may be one reason people seem to inherit conditions such as autism, diabetes, Alzheimer's disease, male sexual orientation, obesity and schizophrenia from only one side of the family. At least one biotechnology company is planning to scan the entire human genome for imprinted genes (detectable with a biochemical test), hoping to use the data to diagnose incipient cancers.

Almost all imprinting happens automatically, long before birth, but in some cases it can result from outside interference. Toxic chemicals, for instance, may eliminate the silencer tag, causing potentially harmful effects that can be transmitted to future generations. (Two points to readers who say, "Lamarck lives!")

The number of human genes where the parent-of-origin matters keeps rising. According to a new computer algorithm, about 600 mouse genes are likely to be imprinted, scientists at Duke University report in Genome Research. If that 2.5% rate holds for humans -- and virtually every mouse gene has a human counterpart -- then we have hundreds of imprinted genes, too.

Among the genes where the parent of origin matters are three on chromosome 10. Only the copies from mom, studies suggest, are turned on. One, expressed in the brain, is linked to late-onset Alzheimer's disease. Another is linked to male sexual orientation, and a third to obesity. With dad's contribution silenced, if there is anything unusual in the copy from mom, that will determine the child's trait. "For Alzheimer's, if the mutation is in dad's gene you'll never see an effect, but if it's in mom's you're at risk for the disease," says Duke's Randy Jirtle.

A gene on chromosome 9, linked to autism, seems to count only if it came from dad. One on chromosome 2 and one on 22 are associated with schizophrenia; only the copies from dad count. Having a family tree mostly free of these diseases is therefore no assurance of good health.
If the disease runs on dad's side, his gene may be defective, and that is the one that matters. As they discover more imprinted genes, scientists are seeing that the silencing tag can be knocked off, with dire consequences. An animal study published this month suggests how. When fetal rats were exposed to two toxic chemicals -- a fungicide called vinclozolin commonly used in vineyards and a pesticide called methoxychlor -- they grew up to have slower- and fewer-than-normal sperm, Michael Skinner of Washington State University and colleagues report in the journal Science. The abnormalities were inherited by the rats' sons, grandsons and great-grandsons. "That environmental toxins can induce a transgenerational genetic change is a phenomenon we never knew existed," Prof. Skinner says. How does it occur? Probably not through harmful mutations, which become rarer with each generation. But imprinting changes, of which Prof. Skinner's group has detected 50 and counting, persist through the generations. The ink is barely dry on the human genome project, but already researchers are onto the "second genetic code," or the pattern of silencers on our DNA. Using a technology called MethylScope ("methyl" is the DNA silencer), "we will map this second genetic code to see which genes are imprinted and identify any differences between normal and cancerous cells," says Nathan Lakey, chief executive of Orion Genomics, a closely held biotechnology concern. Those differences may become the foundation for molecular diagnostic tests within three years, perhaps starting with colon cancer. Normally, the copy of a gene called IGF2 that you get from dad is active, the copy from mom silenced. In 10% of us, though, mom's copy has thrown off the silencer, leading to a greater risk of colorectal cancer. Detecting that unsilencing could provide an early warning of the disease. * You can e-mail me at sciencejournal at wsj.com.
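The mouse-to-human extrapolation in the Duke estimate above is straightforward arithmetic. A minimal sketch in Python, assuming a total of roughly 24,000 genes per genome (an assumption chosen to be consistent with 600 imprinted mouse genes being about 2.5%; the article does not give genome totals):

```python
# Extrapolating the Duke imprinted-gene estimate from mouse to human.
# Assumption (not stated in the article): each genome has ~24,000 genes,
# which makes 600 imprinted mouse genes the cited 2.5% rate.
mouse_total_genes = 24_000
mouse_imprinted = 600
imprinting_rate = mouse_imprinted / mouse_total_genes  # the 2.5% figure

# "Virtually every mouse gene has a human counterpart," so apply the
# same rate to a comparable human gene count.
human_total_genes = 24_000
human_imprinted_est = round(imprinting_rate * human_total_genes)

print(f"Imprinting rate: {imprinting_rate:.1%}")
print(f"Estimated human imprinted genes: {human_imprinted_est}")
```

With these assumed totals the estimate lands in the hundreds, matching the article's claim that "we have hundreds of imprinted genes, too."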
From checker at panix.com Fri Jul 1 17:37:25 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:25 -0400 (EDT) Subject: [Paleopsych] Newswise: Study Shows How Sleep Improves Memory Message-ID: Study Shows How Sleep Improves Memory http://www.newswise.com/p/articles/view/512800/ Source: [1]Beth Israel Deaconess Medical Center Released: Tue 28-Jun-2005, 11:05 ET Newswise -- A good night's sleep triggers changes in the brain that help to improve memory, according to a new study led by researchers at Beth Israel Deaconess Medical Center (BIDMC). These findings, reported in the June 30, 2005, issue of the journal Neuroscience and currently published online, might help to explain why children - infants, in particular - require much more sleep than adults, and also suggest a role for sleep in the rehabilitation of stroke patients and other individuals who have suffered brain injuries. "Our previous studies demonstrated that a period of sleep could help people improve their performance of `memory tasks,' such as playing piano scales," explains the study's lead author Matthew Walker, PhD, director of BIDMC's Sleep and Neuroimaging Laboratory. "But we didn't know exactly how or why this was happening. "In this new research, by using functional magnetic resonance imaging (fMRI), we can actually see which parts of the brain are active and which are inactive while subjects are being tested, enabling us to better understand the role of sleep in memory and learning." New memories are formed within the brain when a person engages with information to be learned (for example, memorizing a list of words or mastering a piano concerto). However, these memories are initially quite vulnerable; in order to "stick" they must be solidified and improved.
This process of "memory consolidation" occurs when connections between brain cells as well as between different brain regions are strengthened, and for many years was believed to develop merely with the passage of time. More recently, however, it has been demonstrated that time spent asleep also plays a key role in preserving memory. In this new study, twelve healthy, college-aged individuals were taught a sequence of skilled finger movements, similar to playing a piano scale. After a 12-hour period of either wakefulness or sleep, the subjects were tested on their ability to recall these finger movements while an MRI measured the activity of their brain. According to Walker, who is also an Assistant Professor of Psychiatry at Harvard Medical School, the MRI results showed that while some areas of the brain were distinctly more active after a period of sleep, other areas were noticeably less active. But together, the changes brought about by sleep resulted in improvements in the subjects' motor skill performance. "The cerebellum, which functions as one of the brain's motor centers controlling speed and accuracy, was clearly more active when the subjects had had a night of sleep," he explains. At the same time, the MRIs showed reduced activity in the brain's limbic system, the region that controls emotions such as stress and anxiety. "The MRI scans are showing us that brain regions shift dramatically during sleep," says Walker. "When you're asleep, it seems as though you are shifting memory to more efficient storage regions within the brain. Consequently, when you awaken, memory tasks can be performed both more quickly and accurately and with less stress and anxiety."
The end result is that procedural skills -- for example, learning to talk, to coordinate limbs, musicianship, sports, even using and interpreting sensory and perceptual information from the surrounding world -- become more automatic and require the use of fewer conscious brain regions to be accomplished. This new research may explain why children and teenagers need more sleep than adults and, in particular, why infants sleep almost round the clock. "Sleep appears to play a key role in human development," says Walker. "At 12 months of age, infants are in an almost constant state of motor skill learning, coordinating their limbs and digits in a variety of routines. They have an immense amount of new material to consolidate and, consequently, this intensive period of learning may demand a great deal of sleep." The new findings may also prove to be important to patients who have suffered brain injuries, for example, stroke patients, who have to re-learn language, limb control, etc. "Perhaps sleep will prove to be another critical factor in a stroke patient's rehabilitation," he notes, adding that in the future he and his colleagues plan to examine sleep disorders and memory disorders to determine if there is a reciprocal relationship between the two. "If you look at modern society, there has in recent years been a considerable erosion of sleep time," says Walker. Describing this trend as "sleep bulimia," he explains that busy individuals often shortchange their sleep during the week -- purging, if you will -- only to try to catch up by "binging" on sleep on the weekends. "This is especially troubling considering it is happening not just among adults, but also among teenagers and children," he adds. "Our research is demonstrating that sleep is critical for improving and consolidating procedural skills and that you can't short-change your brain of sleep and still learn effectively."
Study co-authors include BIDMC researchers Gottfried Schlaug, MD, PhD, Robert Stickgold, PhD, David Alsop, PhD and Nadine Gaab, PhD. This study was supported by grants from the National Institutes of Health and the Dana Foundation. Beth Israel Deaconess Medical Center is a patient care, teaching and research affiliate of Harvard Medical School, and ranks third in National Institutes of Health funding among independent hospitals nationwide. BIDMC is clinically affiliated with the Joslin Diabetes Center and is a research partner of Dana-Farber/Harvard Cancer Center. BIDMC is the official hospital of the Boston Red Sox. For more information, visit [2]http://www.bidmc.harvard.edu. From checker at panix.com Fri Jul 1 17:37:37 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:37 -0400 (EDT) Subject: [Paleopsych] The Scientist: The Uncertain Future for Central Dogma Message-ID: The Uncertain Future for Central Dogma http://www.the-scientist.com/2005/6/20/20/1/printerfriendly Volume 19 | [2]Issue 12 | Page 20 | Jun. 20, 2005 Uncertainty serves as a bridge from determinism and reductionism to a new picture of biology By [4]Arnold F. Goodman, [5]Cláudia M. Bellato and [6]Lily Khidr Kenneth Eward/BioGrafx/Photo Researchers Inc. Nearly two decades ago, Paul H. Silverman testified before Congress to advocate the Human Genome Project. He later became frustrated when the exceptions to genetic determinism, discovered by this project and other investigations, were not sufficiently incorporated in current research and education. In "Rethinking Genetic Determinism,"^[7]1 Silverman questioned one of the pillars of molecular genetics and documented the need for determinism's expansion into a far more valid and reliable representation of reality. He would receive correspondence from all over the world that reinforced this vision.
Silverman firmly believed that we needed a wider-angled model, with a new framework and terminology, to display what we know and to guide future discovery. He also viewed this model as being a catalyst for exploring uncertainty, the vast universe of chance differences on a cellular and molecular level that can considerably influence organismal variability. Uncertainty not only undermines molecular genetics' primary pillars of determinism and reductionism, but also provides a bridge to future research. PILLARS CHALLENGED Arnold Goodman (left) is an associate director of the Center for Statistical Consulting at the University of California, Irvine. Cláudia Bellato (center) is an independent researcher at CENA, University of São Paulo, Brazil. Lily Khidr (right) is a PhD candidate at UC-Irvine. They dedicate this article to the memory of Paul Silverman and thank Nancy, his wife, for her assistance. Various commentaries detail deviation from determinism within the cellular cycle. Here we use the term cellular cycle not in the traditional sense, but rather to describe the cyclical program that starts with gene regulation through transcription, translation, post-processing and back into regulation. Richard Strohman at UC-Berkeley describes the program in terms of a complex regulatory paradigm, which he calls "dynamic epigenetics." The program is dynamic because regulation occurs over time, and epigenetic because it is above genetics in level of organization.^[8]2 "We thought the program was in the genes, and then in the proteins encoded by genes," he wrote, but we need to know the rules governing protein networks in a cell, as well as the individual proteins themselves. John S. Mattick at the University of Queensland focuses upon the hidden genetic program of complex organisms.^[9]3 "RNAs and proteins may communicate regulatory information in parallel," he writes.
This would resemble the advanced information systems for network control in our brains and in computers. Indeed, recent demonstrations suggest that RNA might serve as a genetic backup copy superseding Mendelian inheritance.^[10]4 Gil Ast of Tel Aviv University writes: "Alternative splicing enables a minimal number of genes to produce and maintain highly complex organisms by orchestrating when, where, and what types of proteins they manufacture."^[11]5 About 5% of alternatively spliced human exons contain retrotransposon Alu sequences. These elements represent an engine for generating alternative splicing. Thus we see a genetic control system regulated by protein products, RNAs, and interventions from DNA itself. Yet throughout, the consideration of genetic uncertainty as a bridge to cellular behavior is conspicuously absent. Genetic reductionism, the other pillar of molecular genetics, has many challengers. Among them is Stephen S. Rothman at UC-Berkeley, who described the limits of reductionism in great detail within his comprehensive and well-constructed book.^[12]6 A more recent publication by Marc H.V. Van Regenmortel at France's National Center for Scientific Research updated this assessment by discussing not only the deficiencies of reductionism, but also current ways of overcoming them. "Biological systems are extremely complex and have emergent properties that cannot be explained, or even predicted, by studying their individual parts."^[13]7 NEW CELL MODEL Molecular genetics appears to be at a crossroads, since neither determinism nor reductionism is capable of accurately representing cellular behavior. In order to transition from a passive awareness of this dilemma to its active resolution, we must move from simply loosening the constraints of determinism and reductionism toward a more mature and representative combination of determinism, reductionism, and uncertainty. To facilitate this expansion, we propose a model for the cellular cycle. 
Although only a framework, it provides a vehicle for broader and deeper appreciation of the cell. The figure on page 25 provides a novel structure for understanding current knowledge of the cycle's biological stages, as well as a guide for acquiring new knowledge that may include genetic uncertainty. Organismal Regulation: The organism specifies its cellular needs (bottom red) for the cell to act upon. It converts the comparison of proteins with organismal needs into metabolic agents. The organism then defines its cellular needs (top red). It employs metabolic effects to alter the extra-cellular matrix and signal other needs. Cellular Regulation: Within the bounds of a cell's membrane, cellular needs transmission (top blue) directs the cell in various ways, including proliferation, differentiation, and programmed cell death. It uses such factors as receptors and enzymes to yield molecular messengers. In the cell's nucleus, chromatin remodeling (bottom blue) then rearranges DNA accessibility by uncoiling supercoiled DNA and introducing transcription factors. Transcription: Transcription (left green) DNA serves as the template for RNAs, both regulatory sequences and pre-messenger RNAs. It transcribes polymerases and binding partners into heterogeneous nuclear RNAs. Pre-messenger RNAs then undergo highly regulated splicing and processing (right green). They turn pre-messenger RNAs into mature messenger RNAs. Translation: Within the cytoplasm, messenger RNAs and ribosomes translate 2D-unfolded proteins (left magenta). Secondary structuring and thermodynamic energy (right magenta) then enable physical formations that complete the process with folded proteins and oligonucleotides. Postprocessing: Again within the cytoplasm, tertiary structuring and modification (top aqua) use assemblers, modifiers and protein subunits to supply regulated proteins. Then feedback regulation (bottom aqua) produces heritable gene expression from small RNAs, proteins and DNA. 
The proteins and gene expression, rather than being an endpoint, now begin the whole process over again by signaling other cells, altering and maintaining the genome, and editing RNA transcripts. CELL-BEHAVIOR BRIDGE Model for the Cellular Cycle Helen M. Blau was a keynote speaker at the recent UC-Irvine stem-cell symposium in memory of Paul Silverman and Christopher Reeve.^[17]8 She observed: "Where we look and how we look determine what we see." Although only a brief prescription, we now propose an approach to the exploration for uncertainty that involves both where we look and how we look. We examine those cellular-cycle outputs having a relatively high likelihood of diversity and its frequent companion, uncertainty. As an example of exploring for uncertainty in a cellular cycle, consider the following: Suppose an organismal regulatory program for cellular differentiation might alter the signaling milieu in the extracellular matrix. The signal is internalized by a cell, which might, in turn, alter transcription, produce mature messenger RNAs, produce the 3D-folded proteins, and feed back to alter gene expression for all daughter cells. Now suppose the ECM signaling milieu is altered with a probability p1; the signal is internalized by a cell with a probability p2; transcription will change with a probability p3; mature mRNAs are produced with a probability p4, producing the 3D-folded protein with a probability p5 and altering heritable gene expression with a probability p6. The probabilities p2, p3, p4, p5, and p6 are all conditional on results from the step preceding them, so that the resulting probability of altered heritable gene expression is the product of all of them. Although this probability may be small, is it not preferable to know its form and to later estimate it, than to simply ignore its existence?
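Because each stage probability is conditional on the outcome of the stage before it, the joint probability is simply the product p1 × p2 × ... × p6. A minimal sketch in Python; the numeric values are illustrative placeholders, not estimates from the article:

```python
# Chained-probability calculation for the cellular-cycle example.
# Each probability is conditional on the preceding stage succeeding,
# so the joint probability is the product of all six.
# The values below are arbitrary placeholders for illustration only.
from math import prod

stage_probs = {
    "ECM signaling milieu altered (p1)":            0.30,
    "signal internalized by cell (p2)":             0.50,
    "transcription changed (p3)":                   0.40,
    "mature mRNAs produced (p4)":                   0.60,
    "3D-folded protein produced (p5)":              0.70,
    "heritable gene expression altered (p6)":       0.20,
}

p_altered_inheritance = prod(stage_probs.values())
print(f"P(altered heritable gene expression) = {p_altered_inheritance:.5f}")
# With these placeholders: 0.3 * 0.5 * 0.4 * 0.6 * 0.7 * 0.2 = 0.00504
```

As the text notes, the product is small but well defined; adding alternative stage outcomes or interactions would replace this single chain with a sum over many such products.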
When we consider all possible stage alterations, the diversity of outputs and complexity of our probability calculations will increase. If we also consider all possible interactions, the diversity of outputs and complexity of probability calculations will increase quite substantially. The implications reach far beyond the regulation of a single cell or organism. Sean B. Carroll of the University of Wisconsin, Madison, summarizes evolutionary developmental biology,^[18]9 invoking Jacques Monod's landmark Chance and Necessity, and the Democritus quote upon which it is based: "Everything existing in the universe is the fruit of chance and necessity." Why wouldn't chance also be included in our observations of biology at the molecular level? We've proposed a brief overview of the "what" and "how" for constructing an uncertainty bridge from genetic determinism and reductionism to actual cellular behavior. We hope and believe it meets the spirit of Paul Silverman's prescient vision, as well as his final wishes. References 1. PH Silverman "Rethinking genetic determinism," The Scientist 18(10): 32-3. [[19]Full Text] May 24, 2004 2. R Strohman "A new paradigm for life: beyond genetic determinism," California Monthly 2001, 111: 4-27. 3. JS Mattick "The hidden genetic program of complex organisms," Sci Am 2004, 291: 60-7. [[20]PubMed Abstract] 4. SJ Lolle et al, "Genome-wide non-mendelian inheritance of extra-genomic information in Arabidopsis," Nature 434: 505-9. [[21]Publisher Full Text] March 24, 2005 5. G Ast "The alternative genome," Sci Am 2005, 292: 58-65. 6. SS Rothman Lessons from the Living Cell: The Limits of Reductionism New York: McGraw-Hill 2001. 7. MHV Van Regenmortel "Reductionism and complexity in molecular biology," EMBO Reports 2004, 5: 1016-20. [[22]PubMed Abstract][[23]Publisher Full Text] 8. HM Blau "Stem-cell scenarios: adult bone-marrow to brain and brawn," Developing Stem-Cell Therapies: A Symposium in Memory of Paul H. 
Silverman and Christopher Reeve University of California, Irvine October 20, 2004. 9. SB Carroll Endless Forms Most Beautiful New York: W. W. Norton 2005. References 4. mailto:agoodman at uci.edu 5. mailto:bellato at cena.usp.br 6. mailto:lkhidr at uci.edu 14. http://www.the-scientist.com/content/figures/0890-3670-050620-20-1-l3.jpg 19. http://www.the-scientist.com/2004/05/24/32/1 20. http://www.biomedcentral.com/pubmed/15487671 21. http://dx.doi.org/10.1038/nature03380 22. http://www.biomedcentral.com/pubmed/15520799 23. http://www.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&cmd=prlinks&retmode=ref&id=15520799 From checker at panix.com Fri Jul 1 17:37:44 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:44 -0400 (EDT) Subject: [Paleopsych] Atlantic: Benjamin M. Friedman: Meltdown: A Case Study Message-ID: Benjamin M. Friedman: Meltdown: A Case Study The Atlantic, 5.7-8 [First, the summary from CHE: http://chronicle.com/prm/daily/2005/06/2005062401j.htm Friday, June 24, 2005 A glance at the July/August issue of The Atlantic Monthly: How hard times put democratic values at risk America's democratic values could be at risk if it experiences an extended economic downturn, writes Benjamin M. Friedman, an economics professor at Harvard University. History shows that intolerance and repression often accompany economic decline, he writes. While the "most familiar example is the rise of Nazism in Germany, following that country's economic chaos in the 1920s" and the worldwide Great Depression of the 1930s, there are many instances in American history in which "declining incomes over an extended period have undermined the nation's tolerance and threatened citizens' freedoms," he notes. Take the Populist Era of the 1880s and 1890s, for instance.
As the economy faltered and wages fell, racism and anti-Semitism spread, and the government passed laws to keep out immigrants and to segregate blacks from whites, Mr. Friedman writes. In the 1920s, when "slow growth together with widening inequality halted improvements in living standards for many Americans," the "upshot was the revival of the Ku Klux Klan, the tightest and most discriminatory immigration restrictions in the nation's history, and the elimination of both federal and state laws designed to protect women and children," he writes. Economic prosperity "is in many ways the wellspring from which democracy and civil society flow," Mr. Friedman argues. "We should be fully cognizant," he concludes, "of the risks to our values and liberties if that nourishing source runs dry." The article, "Meltdown: A Case Study," is drawn from Mr. Friedman's forthcoming book, The Moral Consequences of Economic Growth, to be published by Knopf in October. The article is available online at [54]http://www.theatlantic.com/doc/200507/friedman --Gabriela Montell ----------------- Benjamin M. Friedman: Meltdown: A Case Study The Atlantic, 5.7-8 What America a century ago can teach us about the moral consequences of economic decline Would it really be so bad if living standards in the United States stagnated -- or even declined somewhat -- for a decade or two? It might well be worse than most people imagine. History suggests that the quality of our democracy -- more fundamentally, the moral character of American society -- would be at risk if we experienced a many-year downturn. As the distinguished economic historian Alexander Gerschenkron once observed, even a country with a long democratic history can become a "democracy without democrats." Merely being rich is no bar to a society's retreat into rigidity and intolerance once enough of its citizens sense that they are no longer getting ahead.
American history includes several episodes in which stagnating or declining incomes over an extended period have undermined the nation's tolerance and threatened citizens' freedoms. One that is especially vivid, and that touched many aspects of American life that remain contentious today, occurred during the Populist era, toward the end of the nineteenth century -- roughly from 1880 through the middle of the 1890s. For a decade and a half after the Civil War, economic growth was largely exuberant, society optimistic, and social progress undeniable. But all that changed over the next fifteen years, beginning with a faltering economy. From 1880 to 1890 Americans' real per capita income grew on average by just 0.4 percent a year (versus almost four percent in the 1870s). Then, after a few strong years at the start of the 1890s, the economy collapsed altogether. A severe banking panic set off a steep downturn, widely known at the time as the Great Depression. By the end of 1893 hundreds of banks and 15,000 other businesses, including several major railroads, were bankrupt. Prices, especially farm prices, had been falling even when the economy was growing strongly. Now the declines became ruinous. Wheat dropped from an average price of $1.12 a bushel in the early 1870s to fifty cents or less in the mid-1890s, and corn went from forty-eight cents a bushel to twenty-one. By the early 1890s farmers in some western states were burning their nearly worthless corn for fuel. By 1895 per capita income had fallen below the level it had reached fifteen years earlier. Popular discontent followed economic distress. In 1892 labor action against the Carnegie Steel plant in Homestead, Pennsylvania, sparked an armed battle between striking workers and company-hired Pinkerton forces, leaving sixteen dead and more than 150 wounded. Two years later a strike against the Pullman Sleeping Car Company led President Grover Cleveland to call in the Army to protect the railroads.
At the same time, hundreds of unemployed men, led by Ohio businessman Jacob Coxey (the group was known as "Coxey's Army"), marched on Washington to demand federal assistance. Altogether, during the course of 1894 seventeen such "industrial armies" marched on the capital. But economic concerns did not manifest themselves only, or even primarily, in labor marches and job riots; they soured many aspects of American society. As wages fell and unemployment rose, fearful citizens sought to close the country to newcomers -- particularly from areas other than northwestern Europe. The new Statue of Liberty (completed in 1886) may have proclaimed America's welcome to the world's "huddled masses" and "wretched refuse," but such popular magazines of the day as Harper's and The Atlantic Monthly were full of ethnic jokes and slurs. Beginning in the 1880s hard times catalyzed a movement to tighten immigration standards. In 1882, after riots protesting the use of Chinese labor for railroad construction, Congress barred Chinese immigrants entirely. All other immigrants were subject to a head tax. Some states adopted legislation prohibiting certain noncitizens from acquiring land. Race relations also deteriorated. In a spectacularly unfortunate coincidence that would affect American history for decades, this period of economic stagnation -- the worst up to that time -- set in just as Reconstruction ended and the federal government finally withdrew its troops from the defeated southern states. No one will ever know whether the country's race relations, both in the South and elsewhere, would have taken a different course had America enjoyed robust economic growth during this period. In the event, the result was segregation by race in practically every aspect of daily life, together with appalling racial violence.
One reason for believing that economic frustrations contributed to the sad history that followed is that although the former Confederate states regained full political independence with the end of Reconstruction, in 1877, most of them did not begin to adopt what in time became pervasive "Jim Crow" laws until the 1890s. By the end of that decade most southern states had made it illegal for blacks to ride with whites in railroad cars, and some had also segregated city streetcars and railroad-station waiting rooms. The devices used to deny most black citizens their voting rights -- property and literacy requirements, poll taxes, and white-only primaries -- were likewise adopted mostly in the 1890s or after. But the legal changes enacted during this period barely capture the racist and anti-immigrant (and anti-Catholic, anti-Semitic, anti-ethnic) sentiment of the time. The 1880s saw a rise in vigilante violence in rural areas -- not only lynchings in the former Confederacy but also beatings, murders, and arson by such groups as the Bald Knobbers, in the Ozarks, and the White Caps, in Kentucky and elsewhere. Such colorful populist figures as "Pitchfork" Ben Tillman, who served as governor of South Carolina from 1890 to 1894 and then as a U.S. senator, and Tom Watson, a widely read newspaperman who ran for vice president on the Populist ticket in 1896, were outspoken white supremacists. Tillman publicly defended lynching, called for the repeal of the Fifteenth Amendment (which had given the vote to blacks), and advocated the use of force to disenfranchise blacks in the meantime. Watson's speeches and editorials were regularly devoted to sensational attacks on blacks, Catholics, Jews, and foreigners. The American Protective Association, an anti-Catholic organization founded in Iowa in 1887, spread rapidly once the 1893 depression began, and claimed to have 2.5 million members nationwide by the mid-1890s.
Anti-Semitic propaganda was so common among Populists by 1896 that William Jennings Bryan felt obliged to disavow it during his campaign for the presidency. Steps that would have made America more democratic were not without advocates during this period. Many Populists favored such measures as direct primaries and the popular election of U.S. senators. Some also favored women's suffrage. Bryan was a tireless advocate for all these causes. Yet none of them advanced in the face of prolonged economic stagnation. Meanwhile, the Supreme Court only made matters worse. In two key decisions it effectively gutted the Civil Rights Act passed in 1875 (when economic growth was strong), declaring private racial segregation and then segregation legislated by the states to be constitutionally protected. Throughout the Populist era America's media, politics, and legislation all lent support to cultural exclusion, societal rigidity, and efforts to turn back the clock. These ultimately proved futile, but for a while they poisoned both politics and society. Openness toward the future, faith in a better society for all, and support for the rights of minorities were simply not the order of the day. Economic weakness does not always produce social regress, of course: history is not so deterministic. The depression of the 1930s led, for the most part, to a reaffirmation of America's openness and generosity. But that was atypical; the Populist era was more the norm. When slow growth together with widening inequality halted improvements in living standards for many Americans in the 1920s, the upshot was the revival of the Ku Klux Klan (not just in the South -- at the Klan's peak perhaps one in ten white Protestant U.S. men was a member), the tightest and most discriminatory immigration restrictions in the nation's history, and the elimination of both federal and state laws designed to protect women and children.
Similar economic conditions in the 1970s and 1980s provided the backdrop for another round of anti-immigrant agitation, the rise of the right-wing militia movement, and incidents of politically motivated domestic terrorism. Not just in America but in the other Western democracies, too, history is replete with instances in which a turn away from openness and tolerance, often accompanied by a weakening of democratic institutions, has followed economic stagnation. The most familiar example is the rise of Nazism in Germany, following that country's economic chaos in the 1920s and then the onset of worldwide depression in the early 1930s. But in Britain such nasty episodes as the repression of the suffragette movement under Asquith, the breaking of Lloyd George's promises to the returning World War I veterans, and the bloody Fascist riots in London's East End all occurred under severe economic distress. So did the ascension of the extremist Boulangist movement in late-nineteenth-century France, and the Action Française movement after World War I. Conversely, in both America and Europe fairness and tolerance have increased, and democratic institutions have strengthened, mostly when the average citizen's standard of living has been rising. The reason is not hard to understand. When their living standards are rising, people do not view themselves, their fellow citizens, and their society as a whole the way they do when those standards are stagnant or falling. They are more trusting, more inclusive, and more open to change when they view their future prospects and their children's with confidence rather than anxiety or fear. Economic growth is not merely the enabler of higher consumption; it is in many ways the wellspring from which democracy and civil society flow. We should be fully cognizant of the risks to our values and liberties if that nourishing source runs dry.
This article is drawn from his forthcoming book, The Moral Consequences of Economic Growth, to be published by Knopf in October. ~~~~~~~~ By Benjamin M. Friedman, Professor of economics at Harvard From checker at panix.com Fri Jul 1 17:37:56 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:56 -0400 (EDT) Subject: [Paleopsych] Stephen Kershnar: Giving Capitalists Their Due Message-ID: Stephen Kershnar: Giving Capitalists Their Due Economics and Philosophy (2005), 21:65-87 Cambridge University Press SUNY-Fredonia Abstract In general, capitalists deserve profits and losses for their contribution to the general welfare. Market imperfections and the range of permissible prices (at least within the boundaries of exploitation) prevent the alignment from being a direct one, but the connection generally holds. In the context of the market, this thesis preserves the central place of moral responsibility in moral desert. It also satisfies the fittingness and proportionality conditions of moral desert and provides a backward-looking and pre-institutional ground of it. In addition, the focus on contribution unifies several different types of act-based desert, specifically deserved profits and losses, deserved punishment, and deserved wages. Hence, to the extent that desert-satisfaction is relevant in the selection of an economic system, this result strengthens the case for capitalism. 1. INTRODUCTION Recent discussion has focused on the notion that desert-satisfaction is a factor that determines the value of states of affairs.1 If this is correct, then an assessment of the value of capitalism will depend at least in part on the degree to which the free market satisfies desert. In this paper, I will argue for two theses. 
First, allowing capitalists to keep their profits and losses will in general satisfy desert to a greater degree than will other arrangements (e.g., socialism). Second, the first thesis is the result of the capitalist's contribution to the welfare of others. These theses do not justify the market if one is not a consequentialist (or a non-consequentialist who emphasizes desert) since the goodness of the free market will not directly support its being permissible. Before beginning, it is worth setting out a few notions related to capitalists' profits. The market is a system whereby production and distribution are the result of the voluntary exchange or gift by individual persons or groups. A non-market system is one in which production and distribution do not occur via the market. Socialism is a system whereby the government determines production and distribution and is the primary example of a non-market system. A capitalist is one who has two functions. First, he organizes production. That is, he chooses what, when, and how much to produce. Second, he owns the business's assets. The two combine to explain the central function of a capitalist: control over production. For example, even if the capitalist hires persons to organize production, the capitalist will delegate and approve (at some level) the actions of the organizing employee. In a market, capitalists receive a return, which is the amount of income minus costs. The costs depend on the use of productive resources such as natural resources, labor, and capital. The return is either profit or loss depending on whether the difference is positive or negative. The profits and losses can be either individual or collective depending on the way in which persons coordinate their capitalist activities. A person's well-being is the extent to which her life goes well. 
I shall assume but not defend the view that how well a person's life goes is a function of two factors.2 The first factor is the degree to which she experiences pleasure or desire-fulfillment. The focus on desire-fulfillment rests on the notion that various desired experiences do not have any phenomenal feature in common, e.g., appreciating a philosophical argument and eating cupcakes do not have any shared experiential features. The second factor is the degree to which she has access to various objective-list elements. Objective-list elements are things that make a person's life go better independent of whether they bring her more pleasure or result in greater desire-satisfaction. This includes things such as knowledge, agency, meaningful relationships, and virtue. My argument has three parts. In the first part, I set out an account of desert and of the desert-based criteria by which to judge capitalist profits. In the second part, I argue that capitalist profits are in general deserved. The argument begins by examining the case in the context of a perfect market and then considers the claim that market imperfections make the connection between profit and desert less direct but still reliable. In the third part, I respond to objections. 2. THE NATURE OF DESERT Desert is a type of value. I shall assume that it has the following structure:
A person, S, deserves benefit or cost X if and only if
(a) other things being equal, it is intrinsically better that S receive X than that he not receive it,
(b) S does action or has character A, and
(c) (a) in virtue of (b).
There might also be a causal condition:
(d) (b) causes S to receive X.
An area that illustrates this last condition is poetic justice. One can imagine a case where a person commits a battery and gets an appropriate punishment for battery but for one that he did not commit. In this case (a) through (c) are satisfied but it is not clear that the criminal gets the punishment that he deserves. 
The notion that desert concerns the good rather than the right has several appealing features. First, it intuitively seems that the issue of what persons deserve is distinct from the issue of who has a duty to give it to them. This is particularly attractive where desert rests on such things as virtue or interactions with persons who are now deceased. Second, viewing desert-satisfaction as a determinant of the good allows us to capture our intuitions about the relative value of certain states of affairs. In particular, it allows us to say that other things being equal, the world is a better place when the virtuous get pleasure and the vicious pain than vice versa. Again, these intuitions are appealing regardless of whether we view someone as having a duty to redirect pleasure to the virtuous. Third, viewing desert in terms of the good allows us to distinguish desert from rights, where the latter are relations in which interpersonal duties occupy a central role. Other conditions on desert are worthy of note. First, the act or character in question is something for which the agent is morally responsible. This is not obvious because we sometimes say that a person deserves compensation for an injury, where she is not responsible for receiving that injury. We also say things such as "all children deserve an education," where the children didn't do anything that warrants an education.3 If this broader sense of desert is correct, then the desert theorist will have a hard time explaining what distinguishes desert from other intrinsic-value judgments. In any case, I shall confine desert to intrinsic-value judgments in which the agent is morally responsible for the ground of value. If one thinks that this is unduly narrow, then she should substitute "responsibility-type desert" where I have written "desert." Second, there must be an alignment between the object of desert, X, and the ground, A. 
In particular, if the ground, A, is morally permissible or morally good (depending on whether it is an act or character trait), then the object of desert is good for the person. The converse also holds. This feature relates institutions to desert. An institution is a rule-governed social activity.4 The goodness of an action (and hence the relevant property of it) might causally come about because of the role of institutions but it does not make essential reference to the institution. The bad- or wrong-making feature (e.g., malicious motive or duty infringement) might come about because of the person's intentional action in the context of various social conventions, but the violation of the social convention by itself doesn't explain why the act itself is wrongful. Since the relevant properties of the ground and object of desert are not institutional, desert is pre-institutional (i.e., conceptually, but not causally, independent of institutions). Moral desert is also not a forward-looking justification. That is, a person's desert rests on some past or present act, rather than on the consequences of her receiving the object. This in part explains two fundamental features of desert. First, the object of desert must fit the ground of it. That is, the object must be the right sort of thing. For example, a person who wins a track meet because he is the best runner does not on that basis deserve high grades in his academic courses. Second, the object of desert must be proportionate to the ground.5 For example, a person who does a kind act for a neighbor does not deserve an eternity in heaven for it. These features are inferred from our intuitions. It is also worth noting that, on some accounts, moral desert can be incorporated into forward-looking theories of the right. Desert is backward-looking because it makes essential reference to the past or present. However, on these accounts, desert-grounded duties might be forward-looking. 
This is because desert can be incorporated into the consequentialist duty to bring about the best state of affairs in the future since desert in part determines the value of a state of affairs. Thus, on these accounts, while desert is a backward-looking determinant of the good, it is compatible with forward-looking theories of the right. It might be objected that a theory of the right is backward-looking if it requires persons to bring about outcomes because they stand in a particular relation to the past. This objection raises difficult issues as to whether the bearer of value in such a theory is merely a future state of affairs, a relation between the past and future states of affairs, or both. If this objection is sound, then desert can still be incorporated into a maximization ethic even if it rules out forward-looking theories of the right. If these assumptions about moral desert are correct, then it has the following properties. It must be pre-institutional and not forward-looking. It must have the right structure and also satisfy both fit and proportionality requirements. I shall argue that capitalist profit satisfies these conditions. 3. CAPITALISTS DESERVE THEIR PROFITS AND LOSSES 3.1 Thesis In this section I shall argue that in general capitalists deserve profits and losses and that this desert is grounded in their contribution to the general welfare. Here general welfare is a function (e.g., total or average) of individuals' welfare. The desert-satisfaction generally tracks profits and losses rather than directly tracking them for two reasons. First, profit and loss will not strictly track contribution to the general welfare. This is in part because of market imperfections, e.g., imperfect information and transaction costs, that lessen the degree to which resources satisfy demand. This is also in part because what satisfies demand will not necessarily enhance the general welfare, 
either because the demander's practical reasoning (e.g., about her or others' interests) is defective or because the content and strength of persons' desires do not track the outcome of this practical reasoning.6 The latter misalignment creates room for akrasia.7 Second, in some isolated transactions there is a range of profits due to the arbitrary point by which persons divide the benefits of a transaction. In this case, each individual's profit need not track the contribution to overall welfare even if the aggregate profit does so. 3.2 Assumptions Before proceeding, it is worth laying out my undefended assumptions about the market. First, I assume that the market is just (i.e., it respects persons' moral rights). This might rest on claims about the value of non-coercive interference, efficiency, or both. This assumption is a weak one for it need not deny that a welfare-state supplement to the market is just or permissible. The latter might also be just, for example, if persons voluntarily consent to it by choosing to live within a particular country. Second, I assume that as a general matter current private property rights are morally permissible. This assumption rests on the value of autonomy, which is relevant since autonomy requires the use of objects.8 It also involves some mechanism by which claims of the distant past are cut off.9 This assumption is relevant because on some accounts, the distribution of private property rights might affect the distribution of wealth but not overall productive decisions. This might come about since the more efficient producer will in theory use a particular resource, but a wealth transfer might still be necessary where the more efficient producer is not the party with the legal right to the resource.10 One objection is that the first two assumptions are unnecessary. The concern here is that the thesis of the paper is that capitalism is just because it tracks desert. 
But if this is the case, then there is no need to assume that the market is just because it respects rights. The objector might assert that the second assumption is similarly superfluous. He might say that if I am defending capitalism, then I am defending an account of private property rights. This objection misconstrues my argument, which is for the value of capitalism, not its justness. It is not obvious that desert is a feature of justice, especially if the former is an element of the good and the latter an element of the right. This is not to say that the justice of the background institution of property and the value of capitalism are completely independent of one another. The justice of capitalist property rights is conceptually prior to the value of capitalism since the distribution of moral rights affects what individuals deserve. To see this priority consider a scenario where two persons transfer the control of cars from lone individuals who are not using them to poor families that desperately need them. The first person gets the car by repossessing it when the owner stops making payments (and the sale contract allowed for repossession); the second steals it. Here, the former act grounds positive desert, the second likely does not. In this case, the distribution of moral rights in part determines the value of the acts by affecting what people deserve. Third, I assume that price is, at least within the boundaries of exploitation, a moral-free zone.11 That is, I assume that there is no morally privileged way to distribute a transactional surplus. This assumption allows us to avoid concerns about whether a capitalist receives a disproportionate share of the transactional benefits.12 Such an account rests in part on the notion of a reservation value. This is a value at which a party's gain from the transaction is equal to her best alternative to a negotiated agreement. The reservation value is then used to set the reservation price. 
This is the minimum price she is willing to accept to enter into the agreement. The space between the parties' reservation prices constitutes the bargaining range. Any outcome within the bargaining range involves a transactional surplus that is the difference between the buyer's and seller's reservation prices. This surplus is the gain by the two parties from the contract. A reservation price may be judged as the party's actual reservation price (determined by a counterfactual about the minimum price that a party would accept) or a moral reservation price (the minimum price that a party ought to accept). The latter may take into account the subjective preferences of the relevant party but it is not fully subjective since the party may be mistaken about what is in her interest. The claim of exploitation in a mutually beneficial transaction involves the stronger party using the vulnerability of the weaker party to take a disproportionate share of the transactional surplus relative to each party's moral reservation price. If we set the level of vulnerability as involving a significant threat to a person's or their loved one's overall well-being, then exploitative transactions under current conditions will likely be rare. Fourth, I will assume that, in general, economic value tracks persons' well-being, i.e., the amount of money spent for a particular good generally tracks the contribution of that good to a person's well-being. This relies on the notion that persons have features such as transitive ordered preferences and the ability to select appropriate means to their ends and that such features explain their spending preferences. This generalization is defeated to the extent that aggregate demand does not track well-being due to market imperfections, or if consumers fail to properly reason about what promotes well-being, or if there is a widespread misalignment between the conclusion of such reasoning and persons' desires. 
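The reservation-price arithmetic in the third assumption can be restated numerically. The sketch below is illustrative only: the function names and all the numbers are hypothetical, chosen simply to show how the bargaining range and transactional surplus are computed from the two reservation prices.

```python
# Illustrative sketch of the bargaining-range arithmetic described above.
# All names and numbers are hypothetical, not drawn from the paper.

def transactional_surplus(buyer_reservation, seller_reservation):
    """Surplus available from trade: the gap between the two reservation prices."""
    return buyer_reservation - seller_reservation

def split_surplus(buyer_reservation, seller_reservation, price):
    """How a price within the bargaining range divides the surplus."""
    assert seller_reservation <= price <= buyer_reservation, "price outside bargaining range"
    seller_gain = price - seller_reservation   # seller's share of the surplus
    buyer_gain = buyer_reservation - price     # buyer's share of the surplus
    return buyer_gain, seller_gain

# A buyer willing to pay up to 100 and a seller willing to accept 60
# create a surplus of 40; any price between 60 and 100 merely divides it.
surplus = transactional_surplus(100, 60)
buyer_gain, seller_gain = split_surplus(100, 60, price=75)
```

The point of the sketch matches the paper's third assumption: every price in the range produces the same total surplus, so choosing among those prices is, on this account, a moral-free zone.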
With these assumptions in mind, I set out the notion that in the actual market profits track contribution. 3.3 Profits in the market track contribution The ideal market is one that is characterized by absence of externalities, zero transaction and information costs, full information, perfect competition, and rational individuals.13 These characteristics lead to the maximally efficient use of resources. Under such a system, there will be no entrepreneurial profits since the costless shifting of resources eliminates any inefficient malallocation. The capitalist will still make a profit on his capital that is equal to its marginal value. When we move to the actual market, these characteristics no longer hold. For example, there are real and in fact quite weighty transaction costs. Nevertheless, the market will over time move successively closer to the ideal market, since costs of production and distribution fall as resources are shifted to the most efficient producers and distributors. The actual market will be more efficient than a non-market since it allows for greater specialization in the use of information and better harnesses self-interested motivation. In the actual market, the capitalist receives profits both from a return on capital and from his entrepreneurial activity. Entrepreneurial activity results from the reorganization of production. This second type of profit results because costs decrease but income stays largely fixed. The increased profit is thus the result of earlier malallocation of the factors of production. This second type of profit will decrease over time as competitors adopt the innovations in production. On this account, the increased profit will in general track the contribution to the general welfare since the size of the additional profit is proportional to the gains in well-being that result from the correction of malallocated resources. 
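The two-part profit claim above (a normal return on capital plus an entrepreneurial component that erodes as rivals imitate) can be pictured with a toy calculation. Everything here is hypothetical: the prices, costs, and the one-unit-per-period imitation schedule are invented purely to illustrate the shape of the argument, not taken from the paper.

```python
# Hypothetical illustration of entrepreneurial profit as described above:
# an innovation lowers unit cost while price initially stays fixed, and the
# extra margin shrinks as competitors adopt the innovation and price falls.

price = 10.0      # prevailing price per unit (hypothetical)
old_cost = 9.0    # unit cost before the innovation (hypothetical)
new_cost = 7.0    # unit cost after the innovation (hypothetical)

normal_profit = price - old_cost              # return before innovating
entrepreneurial_profit = old_cost - new_cost  # extra margin from the innovation

# As rivals imitate, competition pushes price down toward the new cost level,
# eroding the entrepreneurial component over successive periods.
margins = []
for period in range(4):
    competitive_price = max(new_cost + 1.0, price - period * 1.0)
    margins.append(competitive_price - new_cost)
# The innovator's margin shrinks toward the normal return as imitation spreads.
```

The declining margin series is what the text means by the second type of profit decreasing over time: the initial gap measures the malallocation being corrected, and competition gradually gives that gain to consumers.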
The idea behind the claim that increased profit will in general track the contribution to general welfare rests on the following general assumptions. First, entrepreneurial profits result from innovations that decrease production and distribution costs. Second, decreased production and distribution costs increase aggregate preference satisfaction, since buyers can now buy the same or better goods for less. Third, increased aggregate preference satisfaction tracks increased welfare, since persons prefer things that increase their welfare. The value of the correction of malallocated resources is a function of some factors that are not under the producer's control, e.g., the nature of consumer demand and the productive threat of competitors. A concern is what the capitalist contributes when it is the workers who actually do the work of changing the workplace. The capitalist's ownership of workplace assets makes changes in the means of production in part attributable to him. Production systems that do not use the market will be less able to reward persons' contribution to the overall welfare. This is because without the market it becomes extraordinarily difficult to measure contribution. The market allows us to gauge the nature and strength of different consumer demands. Without such a mechanism, sales patterns are more likely to reflect governments' rather than consumers' choices as to products and prices, thus blocking any inference about the collective welfare. An objector might claim that workers or consumers can know which citizens have contributed without paying for it. They might cite the case of baseball teams that know which of their players have contributed without having to know what they get paid. The objector might conclude that non-market production systems are as capable of rewarding persons' contributions as market systems are. 
The issue is not whether non-market systems are as capable of rewarding contribution as market systems, but whether they will do so. My claim is that prices are part of a system that provides the strongest incentives to gather and analyze the relevant data. The baseball-team analogy supports my contention. The value of a player is the result of numerous factors including the value of his likely replacement and the player's value to competitor teams. Measuring these values requires sophisticated statistical studies that would probably not have been developed or applied without a competitive market for the information, a market that in turn depends on a competitive market for players. This connection between measuring contribution and prices becomes even more important for production that does not have the same type of fan interest. For example, the contribution of raw cotton to well-being when comparing its use in underwear versus towels is unlikely to be accurately measured unless persons have an incentive to be concerned with this particular comparison. 3.4 Argument No. 1: The contribution model satisfies the desert criteria The capitalist receipt of profits and losses on the basis of contribution aligns with the general pattern of desert. First, profit/loss receipt is pre-institutional (conceptually, but not causally, independent of institutions). The productive contribution to the well-being of others is not a fact that makes essential reference to institutions. The market in which this occurs may, although this is not clear, make essential reference to an institution but this merely serves as the context in which the contribution occurs. Second, the contribution is a past or current act and hence does not provide a forward-looking justification for capitalists keeping their profits and losses. Third, deserved profits and losses have the right structure for desert. 
They involve an in-virtue-of justification of treatment that is claimed to make the world a better place than its absence. Fourth, the capitalist keeping his profits and losses satisfies the fit and proportionality requirements. It intuitively seems appropriate that a person who creates economic value for others should receive economic value himself. The fit criterion is one that is a function of intuition rather than abstract argument. For example, consider the fittingness element of punishing culpable wrongdoers, giving good students high grades, and giving the fastest runner the prize. The consequentialist gains can in part account for these results, but they cannot justify the fittingness relation because of their forward-looking nature. Allowing capitalists to keep their profits and losses satisfies the proportionality requirement since there is a general correlation between profits (and losses) and the contribution to the well-being of others. The link to contribution also unifies the different elements (i.e., moral responsibility and the fittingness condition). Contribution as a ground of desert preserves the essential link to the agent's moral responsibility since it focuses on outcomes, which are closely linked to practical reasoning. This is because practical reasoning, at least on some accounts, has an intention as its conclusion and intentions often, if not always, refer to outcomes the agent wants to bring about. This close relation to practical reasoning is significant since it is practical reasoning that lies at the heart of responsibility. This brings in fittingness, since in the context of capitalist activity the capitalist's reasoning concerns the creation of economic value (although not necessarily for others). Entrepreneurial losses generally result from a misallocation of resources and, where so caused, reflect a diminishment of general welfare. 
There is an issue of whether all failures to use resources to contribute to the general welfare constitute such a misallocation given opportunity costs. I leave aside this issue since nothing in this paper rests on this claim. If contribution to the general welfare makes it desirable that the capitalist receive wealth, then a symmetrical account of desert would suggest that diminishment of welfare makes it desirable that she lose wealth. 3.5 Argument No. 2: The contribution model fits into a deeper explanation of desert The link to contribution also unifies the different types of act-based desert and relates it to character-based desert and in so doing provides the deeper explanation for desert. Through a person's responsible actions, he connects himself to certain values. For example, a person who culpably performs a rape connects himself to certain traits, e.g., cruelty, on which supervene value or disvalue, e.g., badness. Thus, our treatment of a person reflects the fact that he has responsibly connected himself to certain values. It is in part through the bringing about of outcomes that this connection occurs. A similar thing is true of character-based desert. One likely explanation of this unity is that character-based desert rests in the end on the responsible actions whereby a person formed his character in certain ways. The practical reasoning whereby he did this did not focus on the effects of his character but rather on particular acts. Nevertheless, in deciding to do certain acts the agent formed himself in certain ways thereby strengthening his connection to certain values. 3.6 Argument No. 3: The contribution model fits with two other areas of act-based desert I shall argue that the capitalist receipt of her profits and losses is deserved for its contribution to the overall well-being. I begin by noting that we intuitively hold that contribution is an appropriate ground of desert in two other areas: punishment and wages. 3.6.1 Deserved punishment. 
Punishment is deserved for the culpable unjust harm done to others. The harm factor explains why we think that persons who cause greater harm (e.g., murder and rape) deserve more punishment than do those who cause less harm (e.g., theft and battery). Harm can be interpreted as the wrongdoer's contribution to the unjust diminishment of another's well-being. Some of the other factors, like culpability, cannot produce our intuitive sense of the proportionality of deserved punishment. Other factors, e.g., the utility produced by a punishment, cannot act as the ground since they are forward-looking. It might be thought that it is the risk of harm, not the actual harm done, that could equally account for my intuitions.14 The risk of harm here is the product of its probability and magnitude. The problem with this suggestion is that likely harm is relevant only in so far as it relates to culpability, which suggests that this is just a restatement of the culpability condition. To see this, consider cases where persons pose a great risk of harm but are unaware of this risk. For example, imagine that persons who steal license plates generally sell them to underground garages. Unbeknownst to the license-plate thieves, the garages sell these license plates to gangs who then use them to get away with drive-by shootings. For reasons of police investigation, this fact is never publicized. It seems that the license-plate thieves do not deserve a severe punishment because even though they impose a great risk of harm on others, they are not culpable for doing so.15 The amount of deserved punishment involves such factors as the wrongdoer's culpability, the degree of injustice and, most significantly for my purposes, the harm that the wrongdoer brought about. 3.6.2 Deserved wages. The most plausible candidates for the ground of deserved wages are each party's sacrifice, hard work, or contribution. Deserved wages do not rest on a person's sacrifice. 
Persons with greater ability often can do tasks with considerably less sacrifice than those with lower ability. This is especially true where the cause of the greater ability is something that the more able persons enjoyed developing or which is largely the result of causal factors that did not require sacrifice, e.g., superior genetic endowment. If persons with greater ability are more able than others to accomplish the same type of tasks, then it seems that they have sacrificed less to bring about the result. And if a deserved distribution tracks each individual's sacrifice, then the more able persons ought, as a matter of desert, to be given a smaller share of the social surplus. This seems counterintuitive. For example, if two builders do the same task for the same client but one is able to do it more efficiently than the other due to his greater abilities, it does not seem to be a demand of desert that the less efficient builder be paid more. A similar pattern occurs with regard to desert viewed in terms of hard work. For example, consider a case in which master and apprentice create a great work of art together and the apprentice ends up doing most of the legwork (e.g., more total brushstrokes), thereby putting in the greater effort. It intuitively seems that the master deserves a greater share of the rewards since his contribution (e.g., the idea or vision) is greater. If desert tracks hard work and if persons with greater ability need not work as hard to complete the same task as persons of lesser ability, viewing desert in terms of hard work will produce the same counterintuitive pattern of results as does the focus on sacrifice.16 Viewing deserved wages in terms of proportionality to each participant's objective contribution has several advantages. First, it gets around the problem of discrimination against persons of greater ability that characterized the sacrifice- and hard-work-based analyses of fairness. 
Second, this account tracks our intuitions with regard to products produced as a result of the efforts of multiple persons. Here it seems that the fitting reward for each person's participation is proportional to her contribution. This is why we often think it deserved that persons who work longer hours receive more pay but think differently where one person's labor requires considerably greater skills than the others'. Consider a case in which two workers do a job. The first puts in 10 hours using a backhoe while the second puts in 50 hours using a shovel. Because of the greater efficiency of the backhoe, the first accomplishes ten times as much. It does not seem undeserved for the first to get paid considerably more. Third, this account is compatible with different accounts of value and models of contribution that might be used to fill out claims of fairness. In particular, the account is neutral with regard to whether the contribution of each person's labor is a function of its marginal product and whether the value of a contribution is a function of the socially necessary labor time that brought it about.17 My general point about the relation of contribution to deserved wages could be accepted while rejecting one or both of these ideas. Hence, in the context of wages, desert tracks each party's contribution. The focus on contribution unifies the different types of act-based desert, e.g., deserved punishment, wages, and profit, and unifies the act- and character-based desert-types. It unifies the types of act-based desert by viewing desert as a fitting response to the values the agent adopts when acting to change the world. In both cases, we respond to the values to which a person has connected himself. 3.6.3 Desert and effort. The most powerful objection to this argument focuses on the relation of desert to moral responsibility. 
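The backhoe-and-shovel case above can be restated numerically. The sketch below is only an illustration of contribution-proportional pay: the pay pool, the output figures, and the function name are all hypothetical, chosen to match the 10-to-1 productivity ratio in the example.

```python
# Hypothetical restatement of the backhoe/shovel example above:
# pay divided in proportion to contribution (output), not hours worked.

def contribution_shares(outputs):
    """Each worker's share of a reward pool, proportional to output."""
    total = sum(outputs.values())
    return {name: out / total for name, out in outputs.items()}

# Worker A: 10 hours with a backhoe, ten units of work accomplished.
# Worker B: 50 hours with a shovel, one unit of work accomplished.
outputs = {"backhoe_operator": 10.0, "shoveler": 1.0}
hours = {"backhoe_operator": 10, "shoveler": 50}

shares = contribution_shares(outputs)
pay_pool = 1100.0  # hypothetical total payment for the job
pay = {name: share * pay_pool for name, share in shares.items()}
# On the contribution view, the backhoe operator receives roughly ten times
# the shoveler's pay despite working one fifth of the hours.
```

An hours-based or effort-based division of the same pool would invert this result, which is exactly the counterintuitive pattern the text attributes to the sacrifice and hard-work accounts.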
The objector argues that since the ground of moral desert is something for which the agent is morally responsible, the wage a worker deserves depends on some factor that is substantially within her control. The objector continues that since a worker has substantial control over her efforts and not over her contribution or sacrifice, the former is the ground of desert. The objection thus rests on the following three propositions:
1. If something grounds moral desert, then it is something for which the agent is morally responsible.
2. If the agent is morally responsible for something, then she has substantial control over it.
3. The agent has substantial control over, and only over, her efforts.18
One problem with this argument is that it is not clear why the agent doesn't have substantial control over contribution. She may not have as much control as she has over her effort, but this is not required for something to ground desert. It should be noted that the worker has control over the production of certain objects, but not over whether these objects satisfy others' preferences. This is analogous to the way a wrongdoer deserves punishment for actions that infringe on the rights of others in part because such actions cause the victim to suffer, even though the wrongdoer does not control whether the acts bring about another's suffering. A second problem with effort theory arises when we try to identify the scope of effort that grounds deserved wages. For example, consider a case where one middle-aged worker, Al, has worked much harder than a second, Bob, to develop specific abilities or more disciplined work habits, and is consequently much more efficient. The two now put in the same effort, but Al is far more productive. Intuitively, Al seems deserving of higher wages. But if we count the efforts that a worker puts toward developing specific abilities or general abilities (e.g., discipline), then we end up looking at effort that goes far beyond the workplace.
For example, Al's discipline might have been acquired in the college weight room. This broader scope breaks the link between the desert ground and moral responsibility. This is because in many cases the decision to develop or not develop general habits, and perhaps some specific abilities, is made where the agent lacks sufficient information about the relevance of these abilities to future jobs. For example, a player's efforts in learning the intricacies of basketball are often made without considering how this might influence his future employment as a talent scout. A third problem with effort theory is that the agent's efficiency is a function of her planning and her monitoring and adjusting her effort in response to feedback.19 These features are often the result of imagination, intelligence, and other mental events and capacities that do not result from conscious decision. Even if persons can plan or plan to plan, they often can't plan to plan to plan. A dilemma then arises as to whether these mental events (e.g., planning) ground deserved wages. If they do but do not result from conscious decision, then the desert ground is disconnected from moral responsibility. If they do not ground desert but do affect production, then some seemingly relevant effort will not ground desert. Also, on this second horn, wasteful and inefficient effort will generate as much desert as well-planned and well-executed effort. This is counterintuitive. This counterintuitive result persists even where we discount the failure to plan, monitor, or adjust by persons' lower capacities.

3.7 Conclusion
In general, capitalists deserve profits and losses for their contribution to the general welfare. Market and demand imperfections and the moral-free nature of prices (at least within the boundaries of exploitation) prevent the alignment from being a direct one, but the connection generally holds.
The role of contribution preserves the central role of moral responsibility by placing value on the object of practical reasoning. It satisfies the fittingness and proportionality relations. It also provides a backward-looking and pre-institutional ground of moral desert. In addition, the thesis fits into a deeper explanation of desert in terms of persons connecting themselves to values. The contribution thesis also unifies several different types of act-based desert, specifically deserved profits and losses, deserved punishment, and deserved wages. I now turn to some of the objections to the thesis that capitalists deserve their profits and losses based on their contribution to collective well-being.

4. OBJECTIONS
Roughly three types of objection are raised against my thesis. First, capitalists do deserve profits and losses but not on the basis of their contribution to the general welfare. Second, persons do have moral desert but capitalists do not deserve profits and losses. Third, persons do not have moral desert. I shall address the first two; the third would take us too far afield.

4.1 Capitalists deserve profits but not on the basis of their contribution to general welfare
On some accounts, deserved profits rest on things other than contribution. These theories appeal to two types of ground: forward-looking and backward-looking. An influential forward-looking theory is that put forth by N. Scott Arnold.20 Arnold views deserved profit as a type of institutional desert. Institutional desert consists of those rules that award costs and benefits in a way that best achieves the market's essential goal. He asserts that since the market's essential goal is the efficient distribution of scarce resources, market-based desert consists of the rule or rules that best achieve this goal. On his account, the rule (or a member of the set of maximally efficient rules) that does so involves capitalists keeping their profits and losses.
He argues that this reward system is maximally efficient because it provides capitalists with an incentive to produce efficiently, transfers resources to them in a manner proportionate to their past efficiency in reorganizing production, and communicates the efficient means of production to competitors. Hence, on his account, the effects of allowing capitalists to keep their profits and losses make them deserved. This account has several problems. The most glaring is that it makes the effects of capitalists' receipt of profits and losses, rather than their acts, the ground of the desert claim. This is not an act-based ground, and it severs any connection between moral responsibility and deserved profit. This is problematic to the extent that one thinks that desert necessarily involves a ground for which the agent is morally responsible. Another problem is that it is forward-looking. To the extent that desert is not forward-looking, this approach is mistaken. Arnold is aware of this objection and claims that institutional desert can be forward-looking where the institution in question is justified on forward-looking grounds. This conflicts with our intuitions about desert because it threatens to categorize a large set of forward-looking value judgments as desert judgments. This account also relies on institutional desert. As mentioned above, this confuses desert whose grounds are institutional with desert whose ground is pre-institutional but whose effects causally involve institutions. It is hard to see how a ground can essentially refer to an institutional property when such properties are themselves neither something for which the agent is morally responsible nor something that is a good- (or right-) making property. The backward-looking theories agree that capitalists deserve their profit but claim that their desert rests on something else, such as risk or the postponement of gratification. The notion that risk can ground desert is unconvincing.
Intuitively, persons do not deserve things just for taking a risk, independent of the nature and degree of the risk.21 For example, persons who take extreme risks often seem to deserve little (e.g., persons who play the lottery). Similarly, persons who take risks that offer little benefit to the risk-taker or others also do not intuitively seem to deserve very much (e.g., persons who invest in mining gold in areas where there is little evidence that it occurs). Moreover, the fit element is not met. It is intuitively unclear how taking a risk makes it valuable for a person to receive a benefit or loss. If we link this risk to the contribution to others' welfare, the fit does intuitively seem to be met, but it seems to be contribution and not risk that explains the intuitions involved. The postponement of gratification suffers from similar difficulties.22 Postponing gratification does not seem especially valuable; again, its value seems to be a function of the likely outcome and perhaps the reason for the postponement. The fit element is again not met. Postponement of gratification intuitively seems to be neutral, since it is not clear why it matters when a person experiences gratification. However, when linked to something like contribution, the fit element does appear to be satisfied. Other theories might rest deserved profits and losses on hard work or sacrifice.23 These approaches intuitively fail to satisfy proportionality in other areas, since inefficient or mentally slow workers do not seem to have strong deserts based on their hard work or sacrifice. For example, we intuitively think that a talented quarterback (Ken) who leads his team to repeat championships deserves more income for his playing than does a quarterback with mediocre talent who worked as hard as Ken but who could not read defensive alignments as well. In the context of punishment, there is a similar disconnection between hard work and sacrifice and deserved punishment.
If we have analogous intuitions with regard to capitalists, and I think we do, then this suggests that these grounds will not satisfy the proportionality feature of desert. George Sher argues to the contrary that hard work grounds desert because the person is investing his time and energy in a project. Since this is in effect an investment of his life, it has value.24 This argument rests on the premise that since a person's life has value, whatever the life is invested in also has value. This is plausible because it appears able to satisfy the fit relation (because the life has value) and proportionality (since the amount of value tracks the amount of the agent's life mixed in). As with the investment of labor, however, it is not clear that a person can literally invest his life in a project. After all, a life is not an object but an object's duration in time, and this seems incapable of being mixed into other physical objects.25 A more figurative account of mixing one's life into a project escapes this metaphysical concern but only by making Sher's position obscure. Assuming, then, that I am right in arguing that if capitalists deserve their profits and losses, then contribution grounds them, we still need to look at a challenge to the antecedent.

4.2 Persons deserve things but capitalists don't deserve their profits and losses
John Christman raises a powerful objection to the idea that contribution grounds capitalist desert.26 Christman notes that in an imperfect free market, profit and loss are the result of consumer demand and the competitive threat posed by other producers (the proximity and capacity of potential competitors). Christman then argues that profits and losses do not satisfy the proportionality requirement of desert because neither the consumer benefit nor the producer's contribution determines the size of profit.27 Christman provides three main reasons in support of these claims.
First, consumer demand and competitive threat determine profit level but are not part of the consumer benefit. Second, since greater competitive threat benefits consumers, the size of profit is in fact inversely related to consumer benefit. Third, the consumer and competitor conditions are not things within the producer's control and hence not part of his contribution. The first reason should be quickly rejected. The proponent of the contribution thesis asserts that the shape and magnitude of consumer demand and competitive threat are the context in which the benefit is provided. He does not assert that they are part of the consumer benefit. This is analogous to the way in which a starting baseball player has more value to his team the less able his backup is, but the weakness of the backup is not itself a benefit that the starter provides to his team. Similarly, the amount of misallocation of resources is the context in which the consumer benefit is provided, but it is not itself a benefit. The notion that the profit is inversely related to the benefit provided to consumers also rests on an error. The error arises because Christman apparently holds the benefit fixed in different competitive environments and then argues that, given this level of benefit, additional profits merely harm the consumers. The problem here is that this is not something a contribution theorist would hold fixed. If one producer vastly improves production efficiency as compared to his competitors, then over time he contributes more to consumers than one who only slightly improves it. If this is correct and if profits track the degree of improvement, and I assume that they do, then profits will track contribution. The third reason is trickier, for it seems to force the contribution theorist to allow that the desert ground involves causal factors outside of the agent's control. Every desert theory must make such an allowance, however, at least to some extent.
This is because it is logically impossible for agents to deserve (or even control) the traits and conditions that make their actions and character possible. That is, acceptance of the following proposition will make moral desert impossible: if a person does not deserve to have X and X makes Y possible, then that person does not deserve Y. This can be seen in the fact that no one can deserve the ability to exert effort or a life-sustaining environment.28 Such desert would be possible only for a self-caused being, and such a being is impossible. However, once we allow the causal basis of desert grounds to include things outside the agent's control, the door is open wide enough to allow persons to deserve things based on their contribution to consumers even though this depends in part on contextual factors. This is because what desert requires is merely that the ground of desert is substantially under the agent's control. The choice of what, where, and when to produce meets this test. An objection that might be raised to this line of argument is that the fact that persons have different opportunities to make a capitalist contribution undermines the case for desert.29 The different opportunities are relevant only if moral desert is comparative, either in general or in the economic context. This does not appear to be true in other areas of desert. Consider deserved punishment. How much a rapist deserves to suffer for his act is intuitively independent of how much punishment past and future rapists will be given. If, for example, misogynistic judges have given other rapists one-week sentences, this does not mean that such a penalty for future rapists is deserved. The same appears to be true of character-based desert. A virtuous person does not appear to deserve a life of suffering even if this is what has been given to other similarly virtuous persons. Similar intuitions hold in the context of economic desert.
If others who contributed greatly to consumers have received very little benefit for their contribution, then it hardly seems deserved for the next great contributor to get little. If this pattern holds, then moral desert is probably not relational. The explanation for this is that the fittingness relation is a function of the values a person has connected himself to (and perhaps also the strength of the connection), and this is conceptually independent of others' actions and characters. Another objection is that capitalists don't deserve their profits and losses because desert rests on virtue or vice alone and capitalists' characters vary. The objector might be arguing that the negative desert that accompanies the vicious character of some capitalists overrides the positive desert that accompanies their contribution to others. This, however, is compatible with the conclusion of this paper, since there is nothing about the conclusion that prevents different deserts from being combined to produce an overall desert, analogous to the way that vector forces combine to produce a net force. To be interesting, then, the objector must be asserting the following.
Strong Character Theory: Moral desert rests on, and only on, an agent's character.
There appear to be four main arguments for this theory. One argument behind this notion is that a person is constituted by her character and hence it is what should determine a person's desert. The idea is that what should ground punishment is who we are rather than what we do. One concern with this argument is that desert rests on some factor for which we are responsible and, on a libertarian account of free will, we are fully responsible for who we are only if we chose to be that way.30 This is because on a libertarian account a person's responsibility must rest on factors that are in the end not traceable to factors outside of his control, such as his environment or genetics.
On a fundamental level, only an agent's choices are not traceable to environment or genetics. If we view choices as a type of mental act, then this theory thereby asserts that desert fundamentally rests on acts. Hence, this first argument fails to support the Character Theory. A second argument is that the Strong Character Theory explains why we normally focus on acts. We focus on acts because we can't make reliable judgments of a person's character. We further think that acts ground deserved punishment only in so far as they reflect character. This explains why provocation and duress excuse or partially excuse the agent. They excuse because they involve a disconnection between a person's act and his character. The problem with this account is that the excusing effects of provocation and duress can also be explained in terms of their undermining a person's responsibility for his acts. They do so by introducing emotional forces that overcome a person's ability to control his acts (and would do so for an ordinary person). Thus the role of excuses such as provocation and duress does not support the Strong Character Theory. A third argument for the Strong Character Theory is that character is less subject to moral luck (external influences that affect a person) than other factors and hence a more appropriate ground for desert. The problem with this argument is that there is also constitutive moral luck, i.e., moral luck that shapes what character one has.31 This influence is obviously quite strong, which is why we think that it is important when raising children to have healthy environments. If the influence of moral luck prevents a factor from grounding desert, then this undermines the notion that character grounds desert. One might argue that moral luck shapes character formation less than other factors, e.g., acts and attempts. However, it is not clear what argument supports this claim. 
This is particularly true if we recognize the significant role genetics plays in determining a person's intelligence, personality, and life outcomes.32 It is not clear in what sense genetics plays a similar role in determining whether someone attempts to perform or performs an act, although it may come into play in so far as a person's character explains in part his thoughts and actions. Thus, the problem of moral luck does not provide clear support for the Character Theory. A fourth argument, from intuitions, does support the Character Theory. Here our intuitions suggest that the world is a better place if virtuous rather than vicious persons receive increased well-being. Consider the following case, involving the allocation of opium. There are two persons with the same level of unhappiness, both of whom have a painful terminal illness. The first is virtuous, although his role as a space explorer has not allowed him to express it by directly benefiting many people. The second is vicious but was unable to express this for the same reason. It intuitively seems that if we have enough opium for only one person (and it can't be divided), then the world is a better place if we give it to the virtuous rather than the vicious person. These thought experiments are somewhat hard to imagine, since it is difficult to believe that very different characters over the life of the relevant individuals didn't translate into a different number of wrongdoings. If one thinks that this difficulty does not undermine our confidence in our intuitions, then this thought experiment and ones like it still do not provide any support for the Strong Character Theory because they do not rule out acts as a ground of desert. The objector might concede that the Strong Character Theory is false, but assert that the only acts that ground desert are ones done from certain motives (e.g., duty or love of humanity). He might assert that capitalist acts are rarely done from such motives.
The problem is that some acts that ground positive desert (e.g., benefiting one's children where one identifies their interests with one's own) are done out of self-interest. Also, some acts that ground negative desert (e.g., injuring others in the pursuit of an ideological goal or to promote the interests of one's children) are done out of desirable motives. If this is correct, then act-based desert need not reduce to motive-based desert. This makes sense, since that which grounds desert is something for which a person is responsible and it is not clear that persons can select the motive from which they act. In conclusion, then, there is no support for the notion that acts don't ground desert. If this is correct, then in deciding whether capitalists deserve their profits, we have to be concerned with both their character- and act-based desert. The latter includes capitalist contributions to the welfare of others. One last objection that might be raised is that capitalist desert rests on the legitimacy of the capitalists' ownership of the means of production, both in general and in the particular ways found in the actual world.33 A defense of this claim would take us too far afield, but arguments based on either autonomy or consequences are available to support the claim of legitimacy. A final type of objection rests on the denial of moral desert in general. A general defense of desert is a project that is outside the scope of this essay. Such a defense will likely rely on showing that our considered moral judgments cohere around certain intrinsic 'better-than' judgments in which the moral ground is something for which persons are morally responsible and the existence of moral desert best explains this coherence. I will instead merely stipulate that my conclusion depends on the undefended claim that persons deserve things.

5. CONCLUSION
In general, capitalists deserve profits and losses for their contribution to the general welfare.
The role of contribution preserves the central role of moral responsibility, satisfies the fittingness and proportionality relations, and provides a backward-looking and pre-institutional ground of moral desert. In addition, the thesis fits into a deeper explanation of desert in terms of persons connecting themselves to values. The contribution thesis also unifies several different types of act-based desert, specifically deserved profits and losses, deserved punishment, and deserved wages. Hence, to the extent that desert-satisfaction is relevant in the selection of an economic system, this result strengthens the case for capitalism.34

REFERENCES
Arnold, N. S. 1987. Why profits are deserved. Ethics 97: 387–402
Arthur, J. and W. H. Shaw, eds. 1991. Justice and economic distribution, 2nd edn. Prentice Hall
Buchanan, A. 1991. Efficiency arguments for and against the market. In Justice and economic distribution, 2nd edn, ed. J. Arthur and W. H. Shaw. Prentice Hall: 182–92
Christman, J. 1988. Entrepreneurs, profits, and deserving market shares. Social Philosophy & Policy 6: 1–16
Cohen, G. A. 1979. The labor theory of value and the concept of exploitation. Philosophy and Public Affairs 8: 338–60
Demsetz, H. 1972. Wealth distribution and the ownership of rights. Journal of Legal Studies 1: 223–32
Feinberg, J. 1970. Justice and personal desert. In his Doing and deserving. Princeton University Press: 55–94
Feldman, F. 1995. Adjusting utility for justice: a consequentialist reply to the objection from justice. Philosophy and Phenomenological Research 55: 567–85
Feldman, F. 1997. Desert: reconsideration of some received wisdom. In his Utilitarianism, hedonism, and desert. Cambridge University Press: 175–92
Hurka, T. 2001. The common structure of virtue and desert. Ethics 112: 6–31
Kagan, S. 1999. Equality and desert. In L. P. Pojman and O. McLeod, eds., What do we deserve? Oxford University Press: 298–314
Kershnar, S. 2002. Private property rights and autonomy. Public Affairs Quarterly 16: 231–58
Lomasky, L. 1987. Persons, rights, and the moral community. Oxford University Press
McLeod, O. 1999. Desert and wages. In L. P. Pojman and O. McLeod, eds., What do we deserve? Oxford University Press: 271–82
Metz, T. 2000. Arbitrariness, justice, and respect. Social Theory and Practice 26: 25–45
Miller, D. 1976. Social justice. Oxford University Press
Nagel, T. 1982. Moral luck. In Free will, ed. G. Watson. Oxford University Press: 174–86
Nathanson, S. 1998. Economic justice. Prentice Hall
Parfit, D. 1984. Reasons and persons. Oxford University Press
Parker, R. 1991. Blame, punishment, and the role of result. In Philosophy of law, 4th edn, ed. J. Feinberg and H. Gross. Wadsworth: 732–38
Pinker, S. 2002. The blank slate. Viking
Rawls, J. 1971. A theory of justice. Harvard University Press
Sadurski, W. 1985. Giving desert its due: social justice and legal theory. Reidel
Schweickart, D. 1991. Capitalism, contribution, and sacrifice. In Justice and economic distribution, 2nd edn, ed. J. Arthur and W. H. Shaw. Prentice Hall: 168–81
Sen, A. 1992. Inequality reexamined. Harvard University Press
Sher, G. 1987. Desert. Princeton University Press
Simmons, A. J. 1995. Historical rights and fair shares. Law and Philosophy 14: 149–84
Slote, M. A. 1999. Desert, consent, and justice. In L. P. Pojman and O. McLeod, eds., What do we deserve? Oxford University Press: 210–23
Waldron, J. 1983. Two worries about mixing one's labour. Philosophical Quarterly 33: 39–42
Waldron, J. 1992. Superseding historical injustice. Ethics 103: 4–28
Watson, G. 1977. Skepticism about weakness of will. Philosophical Review 86: 316–39
Wertheimer, A. 1996. Exploitation. Princeton University Press
Zaitchik, A. 1977. On deserving to deserve. Philosophy and Public Affairs 6: 370–88

Notes
1 See, e.g., Feldman (1995), Kagan (1999), and Hurka (2001).
2 Parfit (1984: 493–502).
3 The idea for these points comes from Fred Feldman (1997), especially 182–86.
4 Arnold (1987: 390 n.6).
5 This condition is part of (1)(a) if benefit or cost X is individuated in terms of both type and amount.
6 The idea for this point comes from Thad Metz.
7 A developed argument for this claim is found in Watson (1977: 316–39).
8 For the autonomy defense of private property rights, see Loren Lomasky (1987) and Kershnar (2002).
9 For arguments that current property rights can survive claims of injustice in the distant past, see Simmons (1995) and Waldron (1992).
10 Demsetz (1972).
11 The notion of a moral-free zone is ambiguous between the claim that the transacting parties are morally permitted to arrive at any price and the claim that any price is equally good. I mean the former, whereas some factors, e.g., deserved wages, affect the latter.
12 This account comes from Wertheimer (1996: 20ff., 211). An account of disproportionate gain should focus on whether the stronger party is taking unfair advantage of the weaker party, not on whether the stronger party takes advantage of an unfairness to the weaker party. In the latter case the stronger party is not the cause of the unfairness to the weaker party. Focusing on the latter would make most transactions, no matter how rational and appropriate given the background conditions, unfair if the background conditions reflect injustice or unfairness, and this seems counterintuitive. Unfairness in the context of a disproportionate gain is a property of particular transactions, not a property of macrostate distributions of wealth (or other resources).
This distinction is useful because exploitation can take place within a just economic system and because a non-exploitative transaction can take place within the context of an unjust economic system. Also, since the wrong-making features of distributive injustice and economic exploitation might differ, the two should be kept separate for the purposes of analyzing the permissibility of different acts.
13 Buchanan (1991: 184ff).
14 This view can be seen in Parker (1991: 732–38).
15 One might invoke proximate cause in explaining this intuition. However, if proximate cause is a stand-in for moral responsibility, and I think it is, then we still need an explanation as to why the license-plate thieves are not responsible for this further harm.
16 For a discussion of a type of desert grounded by hard work, see, e.g., Sher (1987: ch. 4). For a discussion of desert grounded by sacrifice, see, e.g., Feinberg (1970: 55–94). For a discussion of desert grounded in contribution, see, e.g., Slote (1999: 210–23). Some theorists argue that there are several desert types and that these types have different types of grounds. See, e.g., Sher, ibid. and McLeod (1999: 271–82).
17 An argument against the marginal product account can be found in Sen (1992). An argument against the labor theory of value can be seen in Cohen (1979: 338–60).
18 Thad Metz argues that contribution is not substantially under our control because it is influenced by endowment (cf. Metz 2000). Endowment also affects the intensity, duration, and decision to exert effort and the planning that directs it. It is not clear why endowment substantially undermines our control for the former and not the latter, especially if we consider, as I argue below, the role of planning.
19 The idea from this paragraph comes from George Sher, 'Effort and imagination', unpublished manuscript.
20 Arnold (1987: 387–402).
21 This point can also be seen in Christman (1988: 11–12) and Arnold (1987: 395). For example, persons who risk their health by copying the stunts in Jackass: The Movie (Paramount 2002) do not deserve profit.
22 Postponement of gratification likely makes sacrifice the basis for desert, and past a certain amount of wealth it is no longer clear that postponing gratification is a sacrifice. Schweickart (1991: 179).
23 The emphasis on hard work can be seen in Miller (1976: 109), Sadurski (1985: 134–35), and Sher (1987: ch. 4); the emphasis on sacrifice in Feinberg (1970: 55–94).
24 Sher (1987: 60–62).
25 The idea for this point comes from Waldron (1983: 39–42).
26 Christman (1988: 12–15). A similar point can be seen in Nathanson (1998: 56).
27 Christman, 13.
28 The idea for this point comes from Sher (1987: ch. 2, esp. 26). Alan Zaitchik points out that the idea behind this attack on desert is that since desert cannot rest on a foundational base it must rest on an impossible infinite regress; Zaitchik (1977: 373). Both arguments are aimed at Rawls (1971: 104, 310–15).
29 The idea for this point comes from Gillian Brock.
30 The notion that desert must rest on factors that we control has been challenged; Feldman (1997: 178–84). I claim that Feldman's counterexamples (e.g., injured parties owed compensation) capture claims rather than desert.
31 This point can be seen in Nagel (1982: 181–82).
32 Pinker (2002: 372–78).
33 Schweickart (1991: 175–76).
34 I am grateful to John Christman, Thad Metz, and George Schedler for the extremely helpful comments and criticisms of this paper.

From checker at panix.com Fri Jul 1 17:38:40 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 1 Jul 2005 13:38:40 -0400 (EDT)
Subject: [Paleopsych] Review of Richard Swinburne, ed., Bayes's Theorem
Message-ID:

Bayes's Theorem (Proceedings of the British Academy, vol.
113), edited by Richard Swinburne, Oxford University Press, 2002, 160 pages Reviewed by Paul Anand, The Open University and Health Economics Research Centre, University of Oxford Economics and Philosophy (2005), 21:139-142 Cambridge University Press DOI: 10.1017/S026626710422051X This short collection of essays celebrates the 200th anniversary of Bayes's Theorem, famous or notorious depending on one's perspective, as the basis for a non-classical approach to statistical inference. Given the steady rise of Bayesianism in econometric and related statistical work, a volume - even one by philosophers - devoted to the theorem responsible should be of considerable interest to many scientists, economists and econometricians included. Comprising four papers based on presentations given to a British Academy symposium, an additional article by David Miller, a biographical note by G. A. Barnard first published by Biometrika in 1958 and a version of the Reverend Thomas Bayes's original essay presented posthumously by Richard Price to the Royal Society in 1763, the collection highlights the existence of a small (and important) body of work that continues to examine conceptual issues in the foundations of statistics. In this review, I shall make brief comments on the contributions but say most about the papers by Sober and Howson. In a substantial introduction (chapter 1), Richard Swinburne locates Bayes's Theorem in a world that permits many concepts of probability. He begins with some preliminary remarks on the meaning of probability and a distinction, due to Carnap, between logical or evidential probability on the one hand and statistical probability on the other. 
He offers a summary of some probability axioms, stated as relating to classes first and then to propositions, and though he says little about the difficulties that are said to follow from the latter approach, he provides a simple account of the Dutch Book argument, claiming that it is strongest when applied to bets that take place simultaneously (a point that parallels a similar issue in the literature on rationality and intransitive preference - see, for example, Anand (1993)). The introduction then develops a thesis about limits to the justification of prior probabilities: only a priori criteria, including the concept of simplicity, can justify a world view in which certain (probability-affecting) factors operate everywhere, or so it is maintained. It may be confusing to have an editor who claims to take a line different from that of his contributors (and all of them at that) on the importance of a priori criteria, but the disparity is not one that seems to interfere with the analysis that follows. The essays themselves begin with a chapter by Elliott Sober whose title, "Bayesianism - its Scope and Limits", indicates precisely, in my view, how we should think of questions concerning Bayesian inference. Sober's description of the issues is clear, though it might have benefited from a discussion of the way in which Bayesian inference is actually used by advocates of this approach to inference. (The later chapter by Philip Dawid, a statistician, fills this gap.) Nonetheless, the difficulties faced by a version of Bayesianism based on priors grounded in insufficient reason, and by the shift to a subjective approach that fails the objective needs of scientific method, are well made. These observations leave open the possibility that Bayesianism with subjective priors might be valid in decision theory even if it were not useful for scientific inference - a position that seems consistent with Sober's stance but one which awaits justification. 
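The worry about where priors come from is easy to make concrete. A minimal sketch of Bayes's Theorem for a binary hypothesis (the numbers are invented purely for illustration) shows how the same evidence yields very different posteriors under an "insufficient reason" 50/50 prior and a sceptical subjective prior:

```python
# Bayes's Theorem: P(H|E) = P(E|H) * P(H) / P(E), where for a binary
# hypothesis P(E) = P(E|H) * P(H) + P(E|~H) * (1 - P(H)).
# All numbers below are invented for illustration.

def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1.0 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Same evidence (same likelihoods), two different priors:
p1 = posterior(prior_h=0.5, likelihood_e_given_h=0.9, likelihood_e_given_not_h=0.2)
p2 = posterior(prior_h=0.01, likelihood_e_given_h=0.9, likelihood_e_given_not_h=0.2)
print(round(p1, 3))  # 0.818 with the 50/50 "insufficient reason" prior
print(round(p2, 3))  # 0.043 with a sceptical subjective prior
```

The identical evidence moves one agent to near-certainty and leaves the other doubtful, which is exactly why subjective priors sit uneasily with the objective aspirations of scientific inference.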
Sober's discussion proceeds to an examination of likelihoodism - an emphasis on prob(observation/hypothesis) as opposed to probabilistic approaches which emphasise prob(hypothesis/observation) - which he uses as a foil, ultimately, against Bayesianism. The analysis begins by noting that likelihoods are "often more objective than prior probabilities," notes an absurd consequence of the likelihood approach, and goes on to argue that what likelihoodism really provides is an account of support for a hypothesis, rather than a measure of its overall plausibility. The discussion is interesting but is linked to statistical inference in biological applications in such a way that many economists would, unfortunately, not find it easy to draw lessons for their own work. However, the same cannot be said for remarks designed, successfully in my view, to interest readers in Akaike's (1973) framework for (econometric) model selection, which aims at finding models that are predictively accurate but not necessarily true. Anyone who might use empirical evidence could profitably read this section, which casts Akaike's approach as an alternative framework to Bayesianism. The fact that it penalises less simple models may well be a significant advantage over Bayesianism, but the claim that this can be justified on principled grounds remains to be proven. At least from this discussion (which its author allows is not comprehensive), it seems that Akaike's approach to predictive accuracy parallels the move from R-squared to adjusted R-squared statistics. However, just because Akaike's statistic makes a deduction for parameters used and calls the result an unbiased estimate of predictive accuracy does not, of itself, tell us that simplicity is, on conceptual grounds, epistemically relevant - a point echoed in remarks by the following contributor. 
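The deduction for parameters can be sketched in a few lines. In one common least-squares form (not taken from the book), the Akaike Information Criterion for a Gaussian fit with n observations, residual sum of squares RSS and k fitted parameters is, up to an additive constant, n*ln(RSS/n) + 2k, with lower values preferred. In the toy comparison below (data and models invented for illustration), a quintic fitted to noisy linear data always lowers RSS relative to a straight line, but the 2k penalty usually leaves it with the worse AIC - the sense in which the criterion parallels the move from R-squared to adjusted R-squared:

```python
import numpy as np

def aic_least_squares(rss, n, k):
    """AIC for a Gaussian least-squares fit, up to an additive constant:
    n * ln(RSS / n) + 2k. The 2k term is the penalty for extra parameters."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # the truth is linear plus noise

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)             # least-squares polynomial fit
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1                                # number of fitted coefficients
    print(degree, round(aic_least_squares(rss, x.size, k), 2))
```

The quintic buys a small reduction in RSS at the price of four extra parameters; whether that trade is epistemically principled, rather than merely convenient, is precisely the question the review leaves open.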
In chapter 3, Colin Howson provides a substantial and wide-ranging essay in which he argues, essentially, for what he calls the "Second Bayesian Theory" (SBT), by which he appears to mean the probabilistic component of theories by Ramsey and de Finetti. (Economists normally refer to this as the theory of subjective probability, and some may not be aware of, but would want to consult, Howson and Urbach's (1993) comprehensive and witty introduction to the literature of which the chapter is part.) This paper is divided into a longer part that surveys, over several subsections, some of the background, followed by a shorter, more technical and focused discussion of issues surrounding a claim about the logical foundations of the probability calculus. The survey section deals with topics that include Fisher and significance tests, Lindley's paradox, likelihood, priors and simplicity, with the aim of raising concerns that Bayesianism can resolve but that the classical approach and its variants may not. The second, shorter part of Howson's essay is devoted to a discussion, centered around his previously published theorem, of the consistency of SBT. (This is difficult reading, as it brings together ideas from optimisation and logic and then does a lot of work using non-technical language.) Understanding the relations between logic and probability and the logical basis of a probabilistic calculus are crucial issues touched on here, though I believe that further comments would have helped the reader assess the project. Howson shows that SBT is an "authentic logic", but given that SBT (from de Finetti on) is an axiomatic theory anyway, I wonder how Howson's arguments for consistency relate to and compare with the claim that SBT is normatively desirable on account of its axioms. One might also ask whether being an authentic logic would turn out to distinguish between alternatives to SBT - 
we now know that a wide range of non-expected, intransitive utility theories can be formalised and normatively justified, so it would be useful to know how much significance we should attribute to being an authentic logic. Put differently, if the "probability axioms are the complete logic of probable inference", as Howson states, what, if anything, does this tell us about the merits of alternative concepts of credence or uncertainty? This is not a criticism of Howson, but it is a reminder that the revolution in the foundations of decision theory over the past 30 years means that nothing about the theory of choice (probability included) can be taken for granted. Of the remaining three contributions, it is fair to say that that by Dawid is the most applied and decision-theoretic. His discussion of legal decisions provides a good (if too rare) mix of application and foundational issues that could be useful for those who teach foundations of decision theory. There is a tendency for some Bayesians to propose the approach as a panacea for a range of inference problems that require different concepts of credence (rather than meanings of probability), and there is some evidence of that tendency here too. Nonetheless, the framework for comparing approaches that Dawid develops has been nicely honed and repays reading whatever one's own standpoint. In contrast, John Earman's chapter has a more historical flavour (unlike his 1992 Bayes or Bust), taking, as it does, themes that tend to interest Bayesians and examining them in the context of Hume's analysis of evidence for miracles. There are some potential points of contact with modern concerns, though these are not Earman's primary focus, and the demolition job he performs is likely to be of most interest to Hume scholars. 
The last chapter in this collection, David Miller's discussion of the propensity view, seems interesting in its own right, though I did feel there was a question as to whether the paper is really sufficiently relevant to the rest of the debate to merit inclusion. That quibble apart, this book provides researchers on the edge of the field with a sense of some key current concerns as well as a useful reference point for those wanting to explore the foundations of statistics (or decision theory) in more depth. REFERENCES Anand, P. 1993. Foundations of Rational Choice Under Risk. Oxford University Press (reprint 2002) Earman, J. 1992. Bayes or Bust. MIT Press Howson, C. and P. Urbach. 1993. Scientific Reasoning: The Bayesian Approach. Open Court (2nd edition) From checker at panix.com Fri Jul 1 17:38:46 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:38:46 -0400 (EDT) Subject: [Paleopsych] Boston Globe: Daddy, what did you do in the men's movement? Message-ID: Daddy, what did you do in the men's movement? http://www.boston.com/news/globe/ideas/articles/2005/06/19/daddy_what_did_you_do_in_the_mens_movement?mode=PF Robert Bly may have retreated to his sweat lodge, but the reconsideration of masculinity and fatherhood he helped initiate hasn't ended. By Paul Zakrzewski | June 19, 2005 THE LAST TIME most of us heard a joke about grown men getting in touch with themselves by beating on drums or hunkering down in sweat lodges, the first Gulf War was in full swing, and Nirvana ruled the airwaves. But for a brief moment in the early 1990s, the ''men's movement" was everywhere you looked, from Jay Leno to ABC's ''20/20" to the pages of Esquire and Playboy. 
And if the movement was never particularly large or diverse - according to Newsweek, about 100,000 mostly white, middle-aged men had attended a patchwork of weekend retreats, conferences, and workshops by 1991, when the movement peaked - it struck a chord with a country that appeared confused about contemporary manhood. Books by Sam Keen, Michael Meade, and other leading figures in the movement sold hundreds of thousands of copies, while Robert Bly's ''Iron John," a cultural exegesis on wounded masculinity in the form of an obscure fairy tale, spent more than 60 weeks on the New York Times bestseller list. Arguably, the Bly-style mythopoetic men's movement, as it was known, can be traced back to the late 1970s, to men's consciousness-raising groups and masculinity classes in places like Cambridge, Berkeley, and Ann Arbor. However, it was Bly's collaboration with Bill Moyers on the 1990 PBS documentary ''A Gathering of Men" that turned the groundswell of retreats and gatherings into a national phenomenon. With his lilting Minnesota brogue and occasional impish aside, the grandfatherly Bly talked about the Wild Man, avatar of a kind of inner masculine authenticity lost during the Industrial Revolution, when fathers left the homestead (and their sons) behind and went to work in factories. With the lore and lessons of manhood no longer passed on to younger generations, men lost a certain kind of male identity, even the sense of life as a quest. ''Many of these men are not happy," Bly wrote of today's ''soft males," as he called them. ''You quickly notice the lack of energy in them. They are life-preserving, but not exactly life-giving." Today, however, the drums have largely fallen silent. While there are still weekend retreats - for example, the ManKind Project, which boasts more than two-dozen centers worldwide, conducts ''New Warrior Training Adventures" for some 3,000 men every year - these are mostly affairs for the already initiated. 
''The men's movement as we knew it has gone underground," says Ken Byers, a San Francisco-based writer and therapist who attended dozens of retreats in the early 1990s. ''Unless you're involved in that underground, there's very little way for the average American man to connect with it." Of course, Bly's mythopoetic movement was only one of several, often contradictory men's movements. Since the 1970s, ''men's rights" advocates have pushed for fathers' parental rights, while profeminist groups such as the National Organization of Men Against Sexism and the national network of Men's Resource Centers want men to become more accountable for sexism, homophobia, and violence. And in the wake of Bly, new mass men's movements seized the media spotlight. In 1995, Nation of Islam leader Louis Farrakhan organized the Million Man March, to inspire African-American men to rebuild their lives and neighborhoods. Meanwhile, by the mid-1990s, the Christian evangelical Promise Keepers were packing hundreds of thousands of men into football stadiums each year for rallies that, like the ''muscular Christianity" movement a century before, encouraged them to reclaim their masculinity by retaking control of their families with the help of Jesus Christ. So what happened to Bly's mythopoetic movement? The negative media coverage, such as Esquire's ''Wild Men and Wimps" spoof issue in 1992, didn't exactly help. But there were other factors, too. For one thing, even many of the men not inclined to dismiss Bly-style gatherings as silly found themselves mystified by the rarefied Jungian concepts tossed around the campfires like so many marshmallows. ''Many of the men I saw worked really hard at trying to figure out the mythology, but they just weren't getting it in the belly," says Byers, echoing the title of Sam Keen's bestselling book. Unlike the Promise Keepers, which held weekly check-in sessions, there was no follow-up work done once participants left their weekend retreats. 
''It was an event, a spectacle," says Michael Kimmel, professor of sociology at SUNY Stony Brook and the author of ''Manhood in America," a 1997 cultural history of masculinity. ''You were supposed to be changed by it and then go home." Part of the problem, too, was the mythopoetic movement's complex relationship to feminism. On the one hand, some feminists construed Bly's attack on feminized males as reactionary. ''I'd hoped by now that men were strong enough to accept their vulnerability and to be authentic without aping Neanderthal cavemen," Betty Friedan told The Washington Post back in 1991. (Bly denied that there was anything anti-woman about his ideas.) What's more, the movement itself could never get beyond the fact that unlike the feminist movement - which itself had lost steam by the 1990s, as women achieved more economic and financial power - Bly and his followers never had any clear political agenda to drive them forward. Then again, perhaps the death of the men's movement has been greatly exaggerated. Like the women's movement, it may just be that its biggest lessons have simply been absorbed into the culture, minus the pagan fairy tales and faux Native American rites. For example, it's evident to any man who carves out time in his busy week to meet his buddies for a drink that, as Bly suggested, men benefit from time spent in ''ritual space" - that is, with other men. (Full disclosure: For the last year I've met with other men in their 30s and 40s for a weekly discussion group in Jamaica Plain, where we talk about everything from career issues to complicated relationships with our fathers.) And whether or not they can tell their Wild Man from their King (another figure in Bly's complex mythological scheme), many younger men want to be more engaged in family life than their own fathers were. 
In 1992, about 68 percent of college-educated men said they wanted to move into jobs with more responsibility, according to a recent study by the Families and Work Institute. A decade later, the number fell to 52 percent. Meanwhile, a 2000 study by the Radcliffe Public Policy Center found that the job characteristic most often ranked as very important by men ages 21 to 39 was a work schedule that allowed them to spend more time with their families. Seventy percent said they were willing to sacrifice pay and lose promotions to do so. Still, the reality of being a good father often poses more of a challenge for these young men than they expect, often in ways that Bly himself might have explained. ''One of the central problems is that the image that men have of immersing themselves in families is a very maternal one," says Mark O'Connell, a Boston-based psychologist and the author of the recent book ''The Good Father: On Men, Masculinity, and Life in the Family" (Scribner). ''They are trying to follow something that isn't altogether authentic and reflective of the different strengths that men bring to the table." America, of course, is a different place than it was when Bly wrote his best seller. Today, when men get together in organized men's groups, they are more likely to talk about Jesus Christ than Iron John. Nevertheless, there's more than a touch of Bly in John Eldredge's ''Wild at Heart: Discovering the Secret of a Man's Soul," an evangelical call-to-arms that has sold 1.5 million copies since it was published in 2001 and that has helped launch a series of weekend workshops. Men still go into the woods, but instead of wrestling with the Wild Man, they meet Jesus, described as a kind of fierce, unfettered energy that transforms ''really nice guys" (a version of Bly's ''soft males") into passionate beings ready to tackle life's adventures, including romantic relationships. 
''Not every woman wants a battle to fight, but every woman wants to be fought for," Eldredge observes in a passage that might have been written before Betty Friedan was born. Meanwhile, after years of dwindling attendance due to financial problems, the Promise Keepers are staging a comeback this summer, hoping to fill 20 stadium rallies across the country. And in March, the first annual Catholic Men's Conference, inspired by the Promise Keepers, attracted 2,200 men to Boston, who came to listen to speakers ranging from Archbishop Sean P. O'Malley to ''Passion of the Christ" star Jim Caviezel to Bush administration official James Towey. (According to organizer Scot Landry, the event's success was fueled by the growing number of men's fellowship groups in the Boston Archdiocese, which have spread from a handful of parishes 5 years ago to between 30 and 50 today.) The emphasis was on the importance of traditional Catholic teachings on sexuality and the family, under which men - not their wives - are called to be ''the spiritual leaders of your home," as one speaker put it. Even if we're not likely to see maverick poets and Jungian therapists on television specials and magazine covers again any time soon, one thing is clear. The Bly-style men's movement highlighted a powerful urge for men to commune with each other that persists today, even among those who wouldn't be caught dead within miles of a drumming circle. ''There was something about Bly's language and approach that was easy to caricature," says O'Connell. ''But he was on to something really important, and a lot of what he was talking about got lost in translation." Paul Zakrzewski is the editor of ''Lost Tribe: Jewish Fiction from the Edge" (Harper Perennial). He lives in Jamaica Plain. E-mail [2]pzak at verizon.net. 
From checker at panix.com Fri Jul 1 17:39:23 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:39:23 -0400 (EDT) Subject: [Paleopsych] NYT: With Music for the Eye and Colors for the Ear Message-ID: With Music for the Eye and Colors for the Ear New York Times, 5.7.1 http://www.nytimes.com/2005/07/01/arts/design/01kimm.html [Another review of the exhibit; I reported earlier on its terrific introduction by the curators and on the opening, attended by 150 people, perhaps a record at the Hirshhorn.] By MICHAEL KIMMELMAN Washington "Visual Music" is a fine-tuned, highly diverting, deceptively radical exhibition about the relationship of music and modern art, lately arrived here at the Hirshhorn Museum. In its hippy-trippy way, it rewrites a crucial chapter of history. Its subtitle is "Synaesthesia in Art and Music Since 1900." Aristotle formulated the idea that each of the five senses - smell, taste, touch, hearing and sight - had its own proper and distinct sphere of activity. There were overlaps, he said (movement pertained both to sight and touch); and he speculated that the mysteries of color harmony might have something to do with musical harmony, an idea that would resonate for centuries. Musical harmony, as an expression of geometry, was thought to be useful to the study of art and architecture from the Renaissance on. But the notion that there was an essential separation among the sensual spheres persisted into the early 19th century. At the same time reports began to emerge of rare people who said they experienced two sensations simultaneously: they saw colors when they heard sounds, or they heard sounds when they ate something. The condition was called synaesthesia. It's no coincidence that scientific interest in synaesthesia coincided with the Symbolist movement in Europe, with its stress on metaphor, allusion and mystery. Synaesthesia was both metaphorical and mysterious. Scientists were puzzled. 
People who claimed to have it couldn't agree about exactly what they experienced. "To ordinary individuals one of these accounts seems just as wild and lunatic as another but when the account of one seer is submitted to another seer," noted the Victorian psychologist and polymath Sir Francis Galton in 1883, "the latter is scandalized and almost angry at the heresy of the former." I have come across via the color historian John Gage an amusing account from some years later by the phonologist Roman Jakobson, who studied a multilingual woman with synaesthesia. The woman described to him perceiving colors when she heard consonants and vowels or even whole words: "As time went on words became simply sounds, differently colored, and the more outstanding one color was, the better it remained in my memory. That is why, on the other hand, I have great difficulty with short English words like jut, jug, lie, lag, etc.: their colors simply run together." Russian, she also told Jakobson, has "a lot of long, black and brown words," while German scientific expressions "are accompanied by a strange, dull yellowish glimmer." "Visual Music" is full of strange, glimmering yellowish and other colored shapes. What might visual art look like if it were akin to music? That's the question the various artists here asked themselves - a question that goes back to Richard Wagner, the Symbolists' patron saint for his dream of a Gesamtkunstwerk, a universal artwork uniting music and art. Painters like Kandinsky, Frantisek Kupka, Mikhail Matiushin (he was a Russian composer, influenced by Arnold Schoenberg, who like Schoenberg also painted) and Arthur Dove, with whom "Visual Music" begins, elaborated on Wagner's theme. They painted pictures that claimed to have the condition of music - pure abstractions with occasional shapes that resembled staves, musical notes or violins. Through the medium of musical metaphor, in other words, synaesthesia gave birth to abstract art. 
This is the show's quite radical, if not altogether original, point: that abstraction's history is not just the familiar sequence of isms (Constructivism, Suprematism, Abstract Expressionism, Minimalism) but also the consequence of a particular idea. The idea is synaesthesia. And its protagonists, while including a few famous names like Kandinsky, were on the whole cultish and now forgotten figures or total outsiders to the art world: they were filmmakers, animators, computer geeks and 1960's psychedelic light show performers. Blurring high and low, their legacy represented not a corruption or cul-de-sac of traditional modernism but a parallel strand of it, which has made its way, willy-nilly, right up to the present. The show here ends with digitally enhanced multimedia works by Jennifer Steinkamp, Jim Hodges and Leo Villareal. Organized by Kerry Brougher and Judith Zilczer at the Hirshhorn, and Jeremy Strick and Ari Wiseman at the Los Angeles Museum of Contemporary Art, "Visual Music" originated in Los Angeles, aptly, since much of what's on view consists of films and other moving images, made by artists from California, a few of whom also worked for Hollywood. This was inevitable. Abstract painters in the early 20th century tried to emulate musical attributes, like rhythm, harmony and tonality, but music is temporal. It moves through time. And to suggest temporality or movement in two dimensions via staggered lines, vortices, cones or whatever spatial device, doesn't suffice. So it fell to experimental filmmakers like Léopold Survage, Viking Eggeling, Hans Richter and Oskar Fischinger to pick up from where Kandinsky left off and devise abstract movies, at first silent, then animating musical scores. The shapes they used were pretty much the same as the ones in the paintings - swirling lines, concentric circles, zigzags, confetti bursts that now pulsed, shimmied and flickered. A slew of devices and charming gimmicks followed. 
The color organ was a clunky box with a silent keyboard, prisms, mirrors and a projector that let a player compose an abstract moving picture. Painters like Daniel Vladimir Baranoff-Rossiné and Stanton Macdonald-Wright, one of its inventors, having come up against the limitations of painting, tried their hands at color organs. New oscilloscopes produced wavy moving patterns that filmmakers like Hy Hirsh could set to jazz or Afro-Cuban music. Len Lye produced cameraless animations by painting straight onto filmstrips. There's a wonderful hand-painted animation by Lye from 1935, "A Colour Box," set to a jaunty tune, which ran as a hit short before feature films in British theaters; it includes, midway through, an advertisement for the postal service, which sponsored Lye, the initials for the post office dancing briefly across the screen. Among my own favorite confections here are ones by Thomas Wilfred, the Whitneys and Jordan Belson. Wilfred, a Danish lute player by training, born in 1889, contrived an instrument he called the clavilux that produced light displays, which, to modern eyes, resemble lava lamps and Hubble space photos before the fact. From Los Angeles, John and James Whitney exploited nascent computer technology, starting in the 1950's, to compose hypnotic, multiscreen abstract films set to raga and other forms of zone-out music. At the Hirshhorn you can recline in the darkness on huge, cushy ottomans, while the Whitneys' images play retinal games with your eyeballs. And from San Francisco, Mr. Belson collaborated on polymorphous audiovisual concerts in the late 1950's and early 1960's that set the stage for the era's psychedelic light shows. A few of these, by collectives like Single Wing Turquoise Bird and Joshua Light Show, are screened in a room at the Hirshhorn, minus only the bongs. In turn, such events inspired Mr. Belson toward more mind-bending, kaleidoscopic films suggesting cosmic swirls and mixing different brands of music. 
Nearly 80 now, he was commissioned by the Hirshhorn to produce a new work for this show, "Epilogue," its lush and misty optics synchronized to a score by Rachmaninoff. All these swimmy works begin to blend together after a while, but what's remarkable about seeing them in one place is precisely that they do look so similar. I said earlier that the experimental films from the 1920's on used the same vocabulary as the paintings from the turn of the century. Likewise, the newest computer-generated installations. "Visual Music," aside from rewriting history, is also a show about failure - the failure of metaphor, which no technology may overcome. In all this time, no perfect way to make art into music has been devised. Squiggly lines and pulsing colors approximate music but they can't ever become it. Aristotle was right. The senses do have their own domains. Music is moving in ways visual art isn't and vice-versa, and that's why they're both necessary. Like Wagner's Gesamtkunstwerk, the dream of making one art that's like another is just a utopian fantasy, born of a peculiarly modern impatience with art's limitations and a misplaced notion that, like science, art needs constantly to advance or else become irrelevant. But art is not science. Its limitations are its virtues. In the meantime it gives us the works here, the best of which are dizzily transporting. From anonymous_animus at yahoo.com Fri Jul 1 18:29:03 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Fri, 1 Jul 2005 11:29:03 -0700 (PDT) Subject: [Paleopsych] shame In-Reply-To: <200507011800.j61I0NR25213@tick.javien.com> Message-ID: <20050701182903.4724.qmail@web30813.mail.mud.yahoo.com> >>When there is "public silence around shame, it doesn't get discussed, it just gets more deeply embedded."<< --I agree. It also gets distorted. If a person can't say "I'm ashamed", it may be easier to translate that into "shame on you!" and pass the emotion to another. 
I've seen many group squabbles where it was obvious to me (being trusted to some extent with the private thoughts of each member of the group rather than just the official story) that each person was to some extent blaming others out of his own shame. Someone who often feels stupid may call another person "stupid," and so on. Someone with less personal experience with each side may simply think "X values intelligence, and calls Y stupid because Y isn't adhering to that value". That Y may really have done something stupid clouds things further. X may never even know he's transferring his shame to another. >>It would be silly to say that shame doesn't hurt and isn't sometimes very painful, but it does make you think about what you hold dear, whether that be at an individual or collective level, or as a nation. It is one of the emotions that most clearly throw into relief the values we have.<< --That's true as long as introspection is allowed and encouraged. Often, national/racial/religious groups bypass introspection and go directly to blame or attack. Shame becomes a trigger for blaming others. >>Yes. Part of my interest in shame came from thinking about the limits of pride, especially when it's used in queer pride, or fat pride, or whatever. There is a real limit to those politics.<< --Agreed. I would include "American pride" which is a kind of national self-esteem movement. Why work on my flaws or build my strengths if I can be proud of where I live, no matter what? >>Shaming is very limited in its value. It requires that someone stand on high and point the finger. Some strands of feminism have used shaming, but it's the experiencing of shame rather than the wielding of shame that can be good.<< --That's a good point. Especially if the blamer is to some extent passing on his/her own shame. The "hot potato" game. The question is, how do you encourage people to feel their own shame rather than passing it on in order to feel more innocent? 
In extreme cases of conflict, each side may be so terrified of feeling shame that it must demonize and attack the other constantly in order to hold the feeling at bay. This can cause whole generations to inherit shame and the mechanisms of denial along with it. Michael From checker at panix.com Sat Jul 2 15:24:21 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:24:21 -0400 (EDT) Subject: [Paleopsych] Economist: Fusion power: Nuclear ambitions Message-ID: Fusion power: Nuclear ambitions http://www.economist.com/science/PrinterFriendly.cfm?Story_ID=4127211 [Thanks to Sarah for this.] Jun 30th 2005 A step towards commercial fusion power. Perhaps [4]Get article background THIS week, an international project to build a nuclear-fusion reactor came a step closer to reality when politicians agreed it should be constructed in France rather than in Japan, the other country lobbying to host it. The estimated cost is $12 billion, making it one of the most expensive scientific projects around--comparable financially with the International Space Station. It is scheduled to run for 30 years, which is handy since, for the past half century, fusion advocates have claimed that achieving commercial nuclear fusion is 30 years away. The International Thermonuclear Experimental Reactor (ITER), as the project is known, is intended to be the final proving step before a commercial fusion reactor is built. It would demonstrate that power can be generated using the energy released when two light atomic nuclei are brought together to make a heavier one--a process similar to the one that powers the sun and other stars. Advocates of fusion point to its alleged advantages over other forms of power generation. It is efficient, so only small quantities of fuel are needed.
Unlike existing nuclear reactors, which produce nasty long-lived radioactive waste, the radioactive processes involved with fusion are relatively short-lived and the waste products benign. Unlike fossil-fuel plants, there are no carbon-dioxide emissions. And the principal fuel, a heavy isotope of hydrogen called deuterium, is present in ordinary water, of which there is no shortage. The challenges of achieving fusion should not be underestimated. A large volume of gas must be heated to a temperature above that found at the centre of the sun. At the same time, that gas must be prevented from touching the walls of the reactor by confining it in a powerful magnetic field known as a magnetic bottle. The energy released in fusion is carried mostly by neutrons, a type of subatomic particle that has no electric charge and hence cannot be confined by the magnetic bottle. Ensuring that the reactor wall can cope with being bombarded by these neutrons presents a further challenge. The costs involved are immense. The budget for ITER involves spending $5 billion on construction, $5 billion on operating costs over 20 years and more than $1 billion on decommissioning. Yet the reason why taxpayers should spend such sums is unclear. The world is not short of energy. Climate change can be addressed without recourse to generating power from fusion since there are already many alternatives to fossil-fuel power plants. And $12 billion could buy an awful lot of research into those alternatives. Part of the reason why commercial fusion reactors have always been 30 years away is that increasing the size of the reactors to something big enough to be a power plant proved harder than foreseen. But fusion aficionados also blame a lack of urgency for the slow progress, claiming that at least 15 years have been lost because of delays in decision-making and what they regard as inadequate funding. There is some truth in this argument. 
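The article's point that fusion needs only small quantities of fuel, with deuterium drawn from ordinary water, can be checked with a rough back-of-envelope estimate. The figures below are standard reference values and are not taken from the article: a natural deuterium abundance of about 1.56 × 10^-4 per hydrogen atom, and roughly 17.6 MeV released per fusion event (the deuterium-tritium figure), so treat this as an order-of-magnitude sketch only.

```python
# Rough estimate: fusion energy obtainable from the deuterium in one litre
# of water. Assumed inputs (not from the article): D/H abundance ~1.56e-4
# and ~17.6 MeV released per D-T fusion event.

AVOGADRO = 6.022e23   # atoms per mole
EV_TO_J = 1.602e-19   # joules per electronvolt

def deuterium_energy_per_litre(d_abundance=1.56e-4, mev_per_fusion=17.6):
    """Approximate energy (joules) from fusing the deuterium in 1 L of water."""
    moles_water = 1000.0 / 18.0                  # ~55.6 mol of H2O per litre
    hydrogen_atoms = 2 * moles_water * AVOGADRO  # two H per water molecule
    deuterium_atoms = hydrogen_atoms * d_abundance
    return deuterium_atoms * mev_per_fusion * 1e6 * EV_TO_J

energy_j = deuterium_energy_per_litre()  # a few tens of gigajoules
```

A litre of water works out to tens of gigajoules of fusion energy, hundreds of times the chemical energy in a litre of petrol, which is why advocates describe the fuel supply as effectively unlimited.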
ITER is a joint project between America, most of the European Union, Japan, China, Russia and South Korea. For the past 18 months, work was at a standstill while the member states wrangled over where to site the reactor in what was generally recognised as a proxy for the debate over the war in Iraq. America was thought to support the placing of ITER in Japan in return for Japan's support in that war. Meanwhile, the Russians and Chinese were supporting France which, like them, opposed the American-led invasion. That France was eventually chosen owes much to the fact that the European Union promised to support a suitable Japanese candidate as the next director general of ITER. Like the International Space Station, ITER had its origins in the superpower politics of the 1980s that brought the cold war to its end as Russia and the West groped around for things they could collaborate on. Like the International Space Station, therefore, ITER is at bottom a political animal. And, like the International Space Station, the scientific reasons for developing it are almost non-existent. They cannot justify the price. References 4. http://www.economist.com/background/displayBackground.cfm?story_id=4127211 E-mail me if you have problems getting the referenced article. 
From checker at panix.com Sat Jul 2 15:24:27 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:24:27 -0400 (EDT) Subject: [Paleopsych] eSkeptic: The Life and Science of Fred Hoyle Message-ID: eSkeptic: The Life and Science of Fred Hoyle From: Michael Shermer Date: Fri, 01 Jul 2005 00:00:00 -0700 To: eugen at leitl.org Subject: eSkeptic: The Life and Science of Fred Hoyle Reply-To: E-Skeptic Friday, July 1st, 2005 --------------------------- To view this newsletter with graphics and formatting, visit: --------------------------- This week's eSkeptic features an announcement for Michael Shermer's upcoming weekend workshop at the Esalen Institute, Science, Spirituality & the Search for Meaning, followed by James N. Gardner's review of Conflict in the Cosmos: Fred Hoyle's Life in Science, a biography by Simon Mitton, published by Joseph Henry Press, ISBN 0309093139. --------------------------- SCIENCE, SPIRITUALITY & THE SEARCH FOR MEANING a weekend seminar led by Michael Shermer August 12th, 8:30pm to August 14th, 11:30am at the Esalen Institute, Big Sur, CA The intellectual and spiritual quest to understand the universe and our place in it is at the core of both science and religion. At the beginning of the 20th century social scientists predicted that belief in God would decrease by the end of the century because of the secularization of society. In fact, the opposite happened. Never in history have so many, and such a high percentage of the population, believed in God and expressed spirituality. To find out why, science historian and social scientist Dr. Michael Shermer has undertaken a monumental study of science, spirituality, and the search for meaning through his numerous writings, presented here for the first time in workshop format. Since humans are storytelling animals, a deeper aspect of this issue involves the origins and purposes of myth and religion in human history and culture.
Why is there an eternal return of certain mythic themes in religion, such as messiah myths, flood myths, creation myths, destruction myths, redemption myths, and end of the world myths? What do these recurring themes tell us about the workings of the human mind and culture? What can we learn from these myths beyond the moral homilies offered in their narratives? What can we glean about ourselves as we gaze into these mythic mirrors of our souls? Humans are not only storytelling animals, we are also pattern-seeking animals, and there is a tendency to find pattern even when none exists. To most of us the pattern of the universe indicates design. For countless millennia we have taken these patterns and constructed stories about how our cosmos was designed specifically for us. For the past few centuries, however, science has presented us with a viable alternative in which we are but one among tens of millions of species, housed on but one planet among many orbiting an ordinary solar system, itself one among possibly billions of solar systems in an ordinary galaxy, located in a cluster of galaxies not so different from billions of other galaxy clusters, themselves whirling away from one another in an expanding cosmic bubble that very possibly is only one among a near infinite number of bubble universes. Is it really possible that this entire cosmological multiverse exists for one tiny subgroup of a single species on one planet in a lone galaxy in that solitary bubble universe? In this workshop, we will explore the deepest question of all: what if the universe and the world were not created for us by an intelligent designer, and are instead just one of those things that happened? Can we discover meaning in this apparently meaningless universe? Can we still find the sacred in this age of science? The answer is yes! ABOUT THE ESALEN INSTITUTE Esalen is, geographically speaking, a literal cliff, hanging precariously over the Pacific Ocean.
The Esselen Indians used the hot mineral springs here as healing baths for centuries before European settlers arrived. Today the place is adorned with a host of lush organic gardens, mountain streams, a cliff-side swimming pool, hot springs embedded in a multimillion-dollar stone, cement, and steel spa, and meditation huts tucked away in the trees. Esalen was founded in 1962 by Stanford graduates Michael Murphy and Richard Price and has featured such notable visitors as Richard Feynman, Abraham Maslow, Timothy Leary, Paul Tillich, Carlos Castaneda, and B. F. Skinner. Regardless of your source of spirituality (science, religion, or self), Esalen embodies the integration of body, mind, and spirit. ABOUT THE SEMINAR LEADER Dr. Michael Shermer is the Founding Publisher of Skeptic magazine, the Director of the Skeptics Society, a monthly columnist for Scientific American, the host of the Skeptics Distinguished Science Lecture Series at the California Institute of Technology (Caltech), and the co-host and producer of the 13-hour Fox Family television series, Exploring the Unknown. He is the author of Science Friction: Where the Known Meets the Unknown, about how the mind works and how thinking goes wrong. His book The Science of Good and Evil: Why People Cheat, Gossip, Care, Share, and Follow the Golden Rule, is on the evolutionary origins of morality and how to be good without God. He wrote a biography, In Darwin's Shadow, about the life and science of the co-discoverer of natural selection, Alfred Russel Wallace. He also wrote The Borderlands of Science, about the fuzzy land between science and pseudoscience, and Denying History, on Holocaust denial and other forms of pseudohistory. His book How We Believe: Science, Skepticism, and the Search for God, presents his theory on the origins of religion and why people believe in God. He is also the author of Why People Believe Weird Things on pseudoscience, superstitions, and other confusions of our time.
According to the late Stephen Jay Gould (from his Foreword to Why People Believe Weird Things): Michael Shermer, as head of one of America's leading skeptic organizations, and as a powerful activist and essayist in the service of this operational form of reason, is an important figure in American public life. Dr. Shermer received his B.A. in psychology from Pepperdine University, M.A. in experimental psychology from California State University, Fullerton, and his Ph.D. in the history of science from Claremont Graduate University. Since his creation of the Skeptics Society, Skeptic magazine, and the Skeptics Distinguished Science Lecture Series at Caltech, he has appeared on such shows as 20/20, Dateline, Charlie Rose, Larry King Live, Tom Snyder, Donahue, Oprah, Leeza, Unsolved Mysteries, and other shows as a skeptic of weird and extraordinary claims, as well as interviews in countless documentaries aired on PBS, A&E, Discovery, The History Channel, The Science Channel, and The Learning Channel. REGISTRATION $595, which includes the workshop, housing, and meals at one of the most beautiful locations in the world. Register for the seminar through the Esalen Institute (not through the Skeptics Society). (831) 667-3038 (programs) (831) 667-3005 (reservations) --------------------------- THE BIG BANG'S STEADY STATE The Life & Science of Fred Hoyle a book review by James N. Gardner The vast scale of the cosmos confounds our imagination. What human mind -- calibrated by natural selection to appreciate intuitively the dimensions of African savannahs, primeval arboreal hideaways, and Ice Age mammoth hunting grounds -- can truly grasp its fathomless enormity?
Billions of galaxies, each containing hundreds of billions of stars, those stars probably orbited by trillions of planets, and the entire fabric of spacetime expanding outward like the surface of an inflating balloon -- this is the surpassingly strange picture of our universe that constitutes the consensus paradigm of modern cosmology. That vision is centered on the premise of a Big Bang -- a primordial explosion that launched the whole shebang hurtling outward at breakneck speed -- which seems, from a commonsense perspective, perfectly outrageous. What came before the Big Bang, we wonder? What caused this peculiar genesis event? Could the cosmos really have been born ex nihilo, for no apparent reason and from the loins of nothing at all? These were the puzzles that led a giant of British astronomy -- Fred Hoyle -- to suggest a dramatic alternative: the steady-state theory, which hypothesized that the universe is eternal and ever-expanding and that the cosmic storehouse of matter is constantly replenished through a process of continuous creation. As Simon Mitton demonstrates in this superb new biography of Hoyle, the great scientist's genius lay in his ability to resist the temptation to surrender to mainstream orthodoxy. While Hoyle's cosmological theory may have turned out to have been spectacularly wrong (more on this below), what cannot be denied is that his stubborn unwillingness to bow to conventional wisdom was a valuable intellectual asset that benefited the entire scientific community. As Mitton puts it: Hoyle's personal contribution to the rebirth of British astronomy came from his outstanding ability to think outside the box... An enduring feature of Hoyle's character was that in every sense he never let setbacks, rejections, or political maneuvers deflect him from his research agenda. 
He always had a deep conviction that in his "search for the truth," which is how he expressed his life's mission, any opponent should be able to provide a counterargument from experiment or direct observation. He declined all opposition based on semantic arguments invoking the philosophy of science, or the deployment of a paradigm, or appeals to common sense. Iconoclasm and catholicity of scientific interest were the two key markers of Hoyle's long and conflict-laden life. As astrophysicist Owen Gingerich observes in a thoughtful foreword to Conflict in the Cosmos, these characteristics -- deeply rooted in Hoyle's hard-scrabble background -- were both his greatest strength and the source of his ultimate undoing: Fred Hoyle was the quintessential outsider, entering Emmanuel College Cambridge from an impoverished family background and with a distinct Yorkshire accent, and leaving Cambridge in a misguided huff 39 years later. But in between he ascended into the highest ranks of British science, almost single-handedly returning Britain to the top echelons of international theoretical astrophysics and setting it on the path toward excellence in observational astronomy. It is a stirring Dickensian story of an inquisitive, rough-hewn lad making the grade in the tightly traditional world of Cantabrigian academia, yet with the depths of a Greek tragedy where the flawed hero finally becomes an outcast. The 39-year interregnum was the central chapter in the scientist's life -- a tumultuous period characterized by heroic accomplishment, intense controversy, and an extraordinary level of celebrity, which Hoyle achieved both as a popular BBC commentator and as a highly successful science fiction writer. As Mitton points out: After 1950, Fred Hoyle was a very public figure at home and abroad... His broadcasts for the BBC in 1950 were just extraordinary and brought him immediate fame as a gifted expositor. 
With his gritty Yorkshire manner, his ability to be picturesque using words alone, and the universe as his topic, he transformed the BBC's approach to academic lectures, persuading them of the benefits of a less donnish style of presentation. His lectures for radio audiences set the prelude for a brilliant parallel career as a popular science and science fiction writer... In science fiction, his first novel, The Black Cloud, remains his best, having now acquired cult status: In 2004, an opinion poll conducted by the British newspaper The Guardian to find the most accomplished science fiction writers placed Hoyle in third position! Hoyle is remembered most vividly for the idea about which he was famously mistaken: that the universe exists in a steady state, with the stockpile of atoms in an eternally expanding cosmos continuously refilled by the constant creation of new matter. Normally, falsified scientific hypotheses like the steady-state conjecture are tossed unceremoniously in the dustbin of intellectual history, serving at best as amusing footnotes to the main body of orthodox theory (think of Darwin's misplaced reliance on Lamarckism as a subsidiary engine of evolution in The Origin of Species). But, once again, Hoyle confounds tradition. Because he was both passionate and brutally honest about the implications of his steady-state hypothesis, Hoyle was able to foment a heated intellectual debate that significantly advanced our understanding of the universe, despite the fact that his particular conjecture turned out to be deeply flawed. As Mitton notes, "What is extraordinary about Fred Hoyle's science is that his impact derives equally from when he was right and when he was wrong!" If Hoyle was wrong about the nature of the process of cosmogenesis, he was spectacularly right about an equally profound mystery: the origin of the chemical elements. 
In what is surely his most important contribution to astrophysics, Hoyle and three collaborators were able to demonstrate rigorously in their famous B^2FH scientific paper that all of the elements of the periodic table except the lightest are forged in the hearts of giant supernovae, under a variety of physical conditions, through a process known as nucleosynthesis. It is this process of stellar alchemy, Hoyle and his colleagues showed, that accounts for the richness and complexity of the chemical palette of the universe, which in turn accounts for the possibility of life. In the midst of this monumental accomplishment, Hoyle stumbled across a deep mystery that eventually lured him away from the shoreline of genuine science out onto the trackless sea of metaphysical speculation: the apparent fine-tuning of nature evidenced by the details of the process through which the element carbon is synthesized. This discovery provoked Hoyle's most controversial conjecture: the notion that the universe appeared to be deliberately fine-tuned to favor the emergence of carbon-based life. As Hoyle wrote late in his life: The issue of whether the universe is purposive is an ultimate question that is at the back of everybody's mind... And Dr. [Ruth Nanda] Anshen has now raised exactly the same question as to whether the universe is a product of thought. And I have to say that that is also my personal opinion, but I can't back it up by too much of precise argument. There are many aspects of the universe where you either have to say there have been monstrous coincidences, which there might have been, or, alternatively, there is a purposive scenario to which the universe conforms.
The debate over this portentous issue rages on to this day, fueled by the recent discovery of the monstrously large landscape of alternate versions of low-energy physics mathematically allowed by M-theory, only a tiny fraction of which would permit the emergence of anything resembling our own universe and of carbon-based life. Indeed, that discovery has led many cutting-edge cosmologists to overlay a refinement of Big Bang inflation theory called eternal chaotic inflation with an explanatory approach, traditionally reviled by most scientists, known as the weak anthropic principle. (The weak anthropic principle merely states in tautological fashion that since human observers inhabit this particular universe, it must perforce be life-friendly or it would not contain any observers resembling ourselves.) Eternal chaotic inflation, invented by Russian-born physicist Andrei Linde, asserts that instead of just one Big Bang there are, always have been, and always will be, an infinite multiplicity of Big Bangs going off in inaccessible regions all the time. These Big Bangs create a vast horde of new universes constantly and the whole ensemble constitutes a multiverse. One gets the uneasy feeling that if this current theorizing turns out to be correct, Fred Hoyle may have been on the right track all along! Perhaps the multiverse is eternal. Perhaps there is a process of continuous creation (a.k.a. eternal chaotic inflation) as opposed to a one-off genesis event (i.e., a single Big Bang). Maybe the only thing Fred Hoyle truly failed to grasp was the sheer, unexpected grandeur of steady-state cosmogenesis. Hoyle believed that the continuous-creation process yielded "no more than one atom in the course of a year in a volume equal to St. Paul's Cathedral." This is an image of a natural process comfortably within the confines of our biologically evolved human imagination.
But if Linde and his colleagues are correct, the process of continuous creation operates at a scale utterly beyond our capacity to physically envision it -- not mere atoms but entire new baby universes are continuously created in an eternal process with striking parallels to Hoyle's discarded steady-state cosmological theory. --------------------------- The eSkeptic newsletter is published (almost) weekly by the Skeptics Society, ISSN 1556-5696. Subscribe to eSkeptic by sending an email to . Unsubscribe by sending an email to . Contact us at . From checker at panix.com Sat Jul 2 15:24:34 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:24:34 -0400 (EDT) Subject: [Paleopsych] Scientific American: Inconstant Constants Message-ID: Inconstant Constants http://www.sciam.com/print_version.cfm?articleID=0005BFE6-2965-128A-A96583414B7F0000 5.5.23 Do the inner workings of nature change with time? By John D. Barrow and John K. Webb Some things never change. Physicists call them the constants of nature. Such quantities as the velocity of light, c, Newton's constant of gravitation, G, and the mass of the electron, m_e, are assumed to be the same at all places and times in the universe. They form the scaffolding around which the theories of physics are erected, and they define the fabric of our universe. Physics has progressed by making ever more accurate measurements of their values. And yet, remarkably, no one has ever successfully predicted or explained any of the constants. Physicists have no idea why they take the special numerical values that they do. In SI units, c is 299,792,458; G is 6.673 × 10^-11; and m_e is 9.10938188 × 10^-31--numbers that follow no discernible pattern. The only thread running through the values is that if many of them were even slightly different, complex atomic structures such as living beings would not be possible.
The desire to explain the constants has been one of the driving forces behind efforts to develop a complete unified description of nature, or "theory of everything." Physicists have hoped that such a theory would show that each of the constants of nature could have only one logically possible value. It would reveal an underlying order to the seeming arbitrariness of nature. In recent years, however, the status of the constants has grown more muddled, not less. Researchers have found that the best candidate for a theory of everything, the variant of string theory called M-theory, is self-consistent only if the universe has more than four dimensions of space and time--as many as seven more. One implication is that the constants we observe may not, in fact, be the truly fundamental ones. Those live in the full higher-dimensional space, and we see only their three-dimensional "shadows." Meanwhile physicists have also come to appreciate that the values of many of the constants may be the result of mere happenstance, acquired during random events and elementary particle processes early in the history of the universe. In fact, string theory allows for a vast number--10^500--of possible "worlds" with different self-consistent sets of laws and constants [see "The String Theory Landscape," by Raphael Bousso and Joseph Polchinski; Scientific American, September 2004]. So far researchers have no idea why our combination was selected. Continued study may reduce the number of logically possible worlds to one, but we have to remain open to the unnerving possibility that our known universe is but one of many--a part of a multiverse--and that different parts of the multiverse exhibit different solutions to the theory, our observed laws of nature being merely one edition of many systems of local bylaws [see "Parallel Universes," by Max Tegmark; Scientific American, May 2003]. 
No further explanation would then be possible for many of our numerical constants other than that they constitute a rare combination that permits consciousness to evolve. Our observable universe could be one of many isolated oases surrounded by an infinity of lifeless space--a surreal place where different forces of nature hold sway and particles such as electrons or structures such as carbon atoms and DNA molecules could be impossibilities. If you tried to venture into that outside world, you would cease to be. Thus, string theory gives with the right hand and takes with the left. It was devised in part to explain the seemingly arbitrary values of the physical constants, and the basic equations of the theory contain few arbitrary parameters. Yet so far string theory offers no explanation for the observed values of the constants. A Ruler You Can Trust Indeed, the word "constant" may be a misnomer. Our constants could vary both in time and in space. If the extra dimensions of space were to change in size, the "constants" in our three-dimensional world would change with them. And if we looked far enough out in space, we might begin to see regions where the "constants" have settled into different values. Ever since the 1930s, researchers have speculated that the constants may not be constant. String theory gives this idea a theoretical plausibility and makes it all the more important for observers to search for deviations from constancy. Such experiments are challenging. The first problem is that the laboratory apparatus itself may be sensitive to changes in the constants. The size of all atoms could be increasing, but if the ruler you are using to measure them is getting longer, too, you would never be able to tell. Experimenters routinely assume that their reference standards--rulers, masses, clocks--are fixed, but they cannot do so when testing the constants. 
They must focus their attention on constants that have no units--they are pure numbers--so that their values are the same irrespective of the units system. An example is the ratio of two masses, such as the proton mass to the electron mass. One ratio of particular interest combines the velocity of light, c, the electric charge on a single electron, e, Planck's constant, h, and the so-called vacuum permittivity, ε_0. This famous quantity, α = e^2/2ε_0hc, called the fine-structure constant, was first introduced in 1916 by Arnold Sommerfeld, a pioneer in applying the theory of quantum mechanics to electromagnetism. It quantifies the relativistic (c) and quantum (h) qualities of electromagnetic (e) interactions involving charged particles in empty space (ε_0). Measured to be equal to 1/137.03599976, or approximately 1/137, α has endowed the number 137 with a legendary status among physicists (it usually opens the combination locks on their briefcases). If α had a different value, all sorts of vital features of the world around us would change. If the value were lower, the density of solid atomic matter would fall (in proportion to α^3), molecular bonds would break at lower temperatures (α^2), and the number of stable elements in the periodic table could increase (1/α). If α were too big, small atomic nuclei could not exist, because the electrical repulsion of their protons would overwhelm the strong nuclear force binding them together. A value as big as 0.1 would blow apart carbon. The nuclear reactions in stars are especially sensitive to α. For fusion to occur, a star's gravity must produce temperatures high enough to force nuclei together despite their tendency to repel one another. If α exceeded 0.1, fusion would be impossible (unless other parameters, such as the electron-to-proton mass ratio, were adjusted to compensate).
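Because the fine-structure constant is dimensionless, anyone can recompute it from the SI values of its ingredients and get the same pure number back. A quick check, using standard CODATA-style values that are not quoted in the article, reproduces the famous 1/137:

```python
# Recompute the fine-structure constant alpha = e^2 / (2 * eps0 * h * c).
# The inputs are standard CODATA-style SI values (assumed, not from the text).

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h = 6.62607015e-34        # Planck's constant, J*s
c = 299792458.0           # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)
inverse_alpha = 1 / alpha  # ~137.036: a pure number, independent of the unit system
```

The units of e^2/(2ε_0hc) cancel completely, which is exactly why α, unlike c or G alone, is a meaningful target for constancy tests.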
A shift of just 4 percent in α would alter the energy levels in the nucleus of carbon to such an extent that the production of this element by stars would shut down. Nuclear Proliferation The second experimental problem, less easily solved, is that measuring changes in the constants requires high-precision equipment that remains stable long enough to register any changes. Even atomic clocks can detect drifts in the fine-structure constant only over days or, at most, years. If α changed by more than four parts in 10^15 over a three-year period, the best clocks would see it. None have. That may sound like an impressive confirmation of constancy, but three years is a cosmic eyeblink. Slow but substantial changes during the long history of the universe would have gone unnoticed. Fortunately, physicists have found other tests. During the 1970s, scientists from the French atomic energy commission noticed something peculiar about the isotopic composition of ore from a uranium mine at Oklo in Gabon, West Africa: it looked like the waste products of a nuclear reactor. About two billion years ago, Oklo must have been the site of a natural reactor [see "A Natural Fission Reactor," by George A. Cowan; Scientific American, July 1976]. In 1976 Alexander Shlyakhter of the Nuclear Physics Institute in St. Petersburg, Russia, noticed that the ability of a natural reactor to function depends crucially on the precise energy of a particular state of the samarium nucleus that facilitates the capture of neutrons. And that energy depends sensitively on the value of α. So if the fine-structure constant had been slightly different, no chain reaction could have occurred. But one did occur, which implies that the constant has not changed by more than one part in 10^8 over the past two billion years. (Physicists continue to debate the exact quantitative results because of the inevitable uncertainties about the conditions inside the natural reactor.) In 1962 P. James E.
Peebles and Robert Dicke of Princeton University first applied similar principles to meteorites: the abundance ratios arising from the radioactive decay of different isotopes in these ancient rocks depend on α. The most sensitive constraint involves the beta decay of rhenium into osmium. According to recent work by Keith Olive of the University of Minnesota, Maxim Pospelov of the University of Victoria in British Columbia and their colleagues, at the time the rocks formed, α was within two parts in 10^6 of its current value. This result is less precise than the Oklo data but goes back further in time, to the origin of the solar system 4.6 billion years ago. To probe possible changes over even longer time spans, researchers must look to the heavens. Light takes billions of years to reach our telescopes from distant astronomical sources. It carries a snapshot of the laws and constants of physics at the time when it started its journey or encountered material en route. Line Editing Astronomy first entered the constants story soon after the discovery of quasars in 1965. The idea was simple. Quasars had just been discovered and identified as bright sources of light located at huge distances from Earth. Because the path of light from a quasar to us is so long, it inevitably intersects the gaseous outskirts of young galaxies. That gas absorbs the quasar light at particular frequencies, imprinting a bar code of narrow lines onto the quasar spectrum. Whenever gas absorbs light, electrons within the atoms jump from a low energy state to a higher one. These energy levels are determined by how tightly the atomic nucleus holds the electrons, which depends on the strength of the electromagnetic force between them--and therefore on the fine-structure constant.
If the constant was different at the time when the light was absorbed or in the particular region of the universe where it happened, then the energy required to lift the electron would differ from that required today in laboratory experiments, and the wavelengths of the transitions seen in the spectra would differ. The way in which the wavelengths change depends critically on the orbital configuration of the electrons. For a given change in alpha, some wavelengths shrink, whereas others increase. The complex pattern of effects is hard to mimic by data-calibration errors, which makes the test astonishingly powerful. Before we began our work seven years ago, attempts to perform the measurement had suffered from two limitations. First, laboratory researchers had not measured the wavelengths of many of the relevant spectral lines with sufficient precision. Ironically, scientists used to know more about the spectra of quasars billions of light-years away than about the spectra of samples here on Earth. We needed high-precision laboratory measurements against which to compare the quasar spectra, so we persuaded experimenters to undertake them. Initial measurements were done by Anne Thorne and Juliet Pickering of Imperial College London, followed by groups led by Sveneric Johansson of Lund Observatory in Sweden and Ulf Griesmann and Rainer Kling of the National Institute of Standards and Technology in Maryland. The second problem was that previous observers had used so-called alkali-doublet absorption lines--pairs of absorption lines arising from the same gas, such as carbon or silicon. They compared the spacing between these lines in quasar spectra with laboratory measurements. This method, however, failed to take advantage of one particular phenomenon: a change in alpha shifts not just the spacing of atomic energy levels relative to the lowest-energy level, or ground state, but also the position of the ground state itself.
In fact, this second effect is even stronger than the first. Consequently, the highest precision observers achieved was only about one part in 10^4. In 1999 one of us (Webb) and Victor V. Flambaum of the University of New South Wales in Australia came up with a method to take both effects into account. The result was a breakthrough: it meant 10 times higher sensitivity. Moreover, the method allows different species (for instance, magnesium and iron) to be compared, which allows additional cross-checks. Putting this idea into practice took complicated numerical calculations to establish exactly how the observed wavelengths depend on alpha in all different atom types. Combined with modern telescopes and detectors, the new approach, known as the many-multiplet method, has enabled us to test the constancy of alpha with unprecedented precision.

Changing Minds

When embarking on this project, we anticipated establishing that the value of the fine-structure constant long ago was the same as it is today; our contribution would simply be higher precision. To our surprise, the first results, in 1999, showed small but statistically significant differences. Further data confirmed this finding. Based on a total of 128 quasar absorption lines, we found an average increase in alpha of close to six parts in a million over the past six billion to 12 billion years. Extraordinary claims require extraordinary evidence, so our immediate thoughts turned to potential problems with the data or the analysis methods. These uncertainties can be classified into two types: systematic and random. Random uncertainties are easier to understand; they are just that--random. They differ for each individual measurement but average out to be close to zero over a large sample. Systematic uncertainties, which do not average out, are harder to deal with. They are endemic in astronomy.
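The distinction between the two kinds of uncertainty can be sketched in a few lines of simulation. This toy example is my own illustration, not anything from the study: it averages noisy measurements that all share a fixed offset.

```python
# Toy demonstration: random errors wash out with sample size,
# a systematic bias does not.
import random

random.seed(0)
true_value = 0.0
systematic_bias = 0.5      # a fixed offset afflicting every measurement
noise = 2.0                # spread of the random errors

for n in (10, 1000, 100000):
    measurements = [true_value + systematic_bias + random.gauss(0, noise)
                    for _ in range(n)]
    mean = sum(measurements) / n
    print(f"n={n:6d}: sample mean = {mean:+.3f}")
```

Averaging more data beats down the random scatter roughly as 1/sqrt(n), but the sample mean converges to the bias rather than to the true value, which is why systematic effects have to be hunted down one by one.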
Laboratory experimenters can alter their instrumental setup to minimize them, but astronomers cannot change the universe, and so they are forced to accept that all their methods of gathering data have an irremovable bias. For example, any survey of galaxies will tend to overrepresent bright galaxies because they are easier to see. Identifying and neutralizing these biases is a constant challenge. The first one we looked for was a distortion of the wavelength scale against which the quasar spectral lines were measured. Such a distortion might conceivably be introduced, for example, during the processing of the quasar data from their raw form at the telescope into a calibrated spectrum. Although a simple linear stretching or compression of the wavelength scale could not precisely mimic a change in alpha, even an imprecise mimicry might be enough to explain our results. To test for problems of this kind, we substituted calibration data for the quasar data and analyzed them, pretending they were quasar data. This experiment ruled out simple distortion errors with high confidence. For more than two years, we raised one potential bias after another, only to rule it out after detailed investigation as too small an effect. So far we have identified just one potentially serious source of bias. It concerns the absorption lines produced by the element magnesium. Each of the three stable isotopes of magnesium absorbs light of a different wavelength, but the three wavelengths are very close to one another, and quasar spectroscopy generally sees the three lines blended as one. Based on laboratory measurements of the relative abundances of the three isotopes, researchers infer the contribution of each. If these abundances in the young universe differed substantially--as might have happened if the stars that spilled magnesium into their galaxies were, on average, heavier than their counterparts today--those differences could simulate a change in alpha.
But a study published this year indicates that the results cannot be so easily explained away. Yeshe Fenner and Brad K. Gibson of Swinburne University of Technology in Australia and Michael T. Murphy of the University of Cambridge found that matching the isotopic abundances to emulate a variation in alpha also results in the overproduction of nitrogen in the early universe--in direct conflict with observations. If so, we must confront the likelihood that alpha really has been changing. The scientific community quickly realized the immense potential significance of our results. Quasar spectroscopists around the world were hot on the trail and rapidly produced their own measurements. In 2003 teams led by Sergei Levshakov of the Ioffe Physico-Technical Institute in St. Petersburg, Russia, and Ralf Quast of the University of Hamburg in Germany investigated three new quasar systems. Last year Hum Chand and Raghunathan Srianand of the Inter-University Center for Astronomy and Astrophysics in India, Patrick Petitjean of the Institute of Astrophysics and Bastien Aracil of LERMA in Paris analyzed 23 more. None of these groups saw a change in alpha. Chand argued that any change must be less than one part in 10^6 over the past six billion to 10 billion years. How could a fairly similar analysis, just using different data, produce such a radical discrepancy? As yet the answer is unknown. The data from these groups are of excellent quality, but their samples are substantially smaller than ours and do not go as far back in time. The Chand analysis did not fully assess all the experimental and systematic errors--and, being based on a simplified version of the many-multiplet method, might have introduced new ones of its own. One prominent astrophysicist, John Bahcall of Princeton, has criticized the many-multiplet method itself, but the problems he has identified fall into the category of random uncertainties, which should wash out in a large sample.
He and his colleagues, as well as a team led by Jeffrey Newman of Lawrence Berkeley National Laboratory, have looked at emission lines rather than absorption lines. So far this approach is much less precise, but in the future it may yield useful constraints.

Reforming the Laws

If our findings prove to be right, the consequences are enormous, though only partially explored. Until quite recently, all attempts to evaluate what happens to the universe if the fine-structure constant changes were unsatisfactory. They amounted to nothing more than assuming that alpha became a variable in the same formulas that had been derived assuming it is a constant. This is a dubious practice. If alpha varies, then its effects must conserve energy and momentum, and they must influence the gravitational field in the universe. In 1982 Jacob D. Bekenstein of the Hebrew University of Jerusalem was the first to generalize the laws of electromagnetism to handle inconstant constants rigorously. The theory elevates alpha from a mere number to a so-called scalar field, a dynamic ingredient of nature. His theory did not include gravity, however. Four years ago one of us (Barrow), with Håvard Sandvik and João Magueijo of Imperial College London, extended it to do so. This theory makes appealingly simple predictions. Variations in alpha of a few parts per million should have a completely negligible effect on the expansion of the universe. That is because electromagnetism is much weaker than gravity on cosmic scales. But although changes in the fine-structure constant do not affect the expansion of the universe significantly, the expansion affects alpha. Changes to alpha are driven by imbalances between the electric field energy and magnetic field energy. During the first tens of thousands of years of cosmic history, radiation dominated over charged particles and kept the electric and magnetic fields in balance.
As the universe expanded, radiation thinned out, and matter became the dominant constituent of the cosmos. The electric and magnetic energies became unequal, and alpha started to increase very slowly, growing as the logarithm of time. About six billion years ago dark energy took over and accelerated the expansion, making it difficult for all physical influences to propagate through space. So alpha became nearly constant again. This predicted pattern is consistent with our observations. The quasar spectral lines represent the matter-dominated period of cosmic history, when alpha was increasing. The laboratory and Oklo results fall in the dark-energy-dominated period, during which alpha has been constant. The continued study of the effect of changing alpha on radioactive elements in meteorites is particularly interesting, because it probes the transition between these two periods.

Alpha Is Just the Beginning

Any theory worthy of consideration does not merely reproduce observations; it must make novel predictions. The above theory suggests that varying the fine-structure constant makes objects fall differently. Galileo predicted that bodies in a vacuum fall at the same rate no matter what they are made of--an idea known as the weak equivalence principle, famously demonstrated when Apollo 15 astronaut David Scott dropped a feather and a hammer and saw them hit the lunar dirt at the same time. But if alpha varies, that principle no longer holds exactly. The variations generate a force on all charged particles. The more protons an atom has in its nucleus, the more strongly it will feel this force. If our quasar observations are correct, then the accelerations of different materials differ by about one part in 10^14--too small to see in the laboratory by a factor of about 100 but large enough to show up in planned missions such as STEP (space-based test of the equivalence principle). There is a last twist to the story.
Previous studies of alpha neglected to include one vital consideration: the lumpiness of the universe. Like all galaxies, our Milky Way is about a million times denser than the cosmic average, so it is not expanding along with the universe. In 2003 Barrow and David F. Mota of Cambridge calculated that alpha may behave differently within the galaxy than inside emptier regions of space. Once a young galaxy condenses and relaxes into gravitational equilibrium, alpha nearly stops changing inside it but keeps on changing outside. Thus, the terrestrial experiments that probe the constancy of alpha suffer from a selection bias. We need to study this effect more to see how it would affect the tests of the weak equivalence principle. No spatial variations of alpha have yet been seen. Based on the uniformity of the cosmic microwave background radiation, Barrow recently showed that alpha does not vary by more than one part in 10^8 between regions separated by 10 degrees on the sky. So where does this flurry of activity leave science as far as alpha is concerned? We await new data and new analyses to confirm or disprove that alpha varies at the level claimed. Researchers focus on alpha, rather than on the other constants of nature, simply because its effects are more readily seen. If alpha is susceptible to change, however, other constants should vary as well, making the inner workings of nature more fickle than scientists ever suspected. The constants are a tantalizing mystery. Every equation of physics is filled with them, and they seem so prosaic that people tend to forget how unaccountable their values are. Their origin is bound up with some of the grandest questions of modern science, from the unification of physics to the expansion of the universe. They may be the superficial shadow of a structure larger and more complex than the three-dimensional universe we witness around us.
Determining whether constants are truly constant is only the first step on a path that leads to a deeper and wider appreciation of that ultimate vista. From checker at panix.com Sat Jul 2 15:24:45 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:24:45 -0400 (EDT) Subject: [Paleopsych] Sunday Times: Married to a Genius by Jeffrey Meyers Message-ID: Married to a Genius by Jeffrey Meyers http://www.timesonline.co.uk/article/0,,2102-1644797,00.html 5.6.12 REVIEWED BY JOHN CAREY MARRIED TO GENIUS by Jeffrey Meyers Southbank Publishing £9.99 pp256 Geniuses are traditionally difficult to live with. It is part of their mystique. Disregard for other people is vital for their art, or so they claim. According to D. H. Lawrence, you must have "something vicious in you" to be a writer. Graham Greene said you needed a splinter of ice in your heart. Jeffrey Meyers's sharp-witted book tests these beliefs by examining the marital relationships of nine writers -- Leo Tolstoy, Joseph Conrad, George Bernard Shaw, James Joyce, D. H. Lawrence, Virginia Woolf, Katherine Mansfield, Ernest Hemingway and Scott Fitzgerald. Each study is brilliant and arresting, and they reflect fascinatingly on one another. Meyers has an intricate grasp of modern literature, and has already written full-scale biographies of five of his subjects. Above all, he reveals how subtly writers' lives infiltrate their fiction -- the hardest trick in literary biography. Most judges would choose Tolstoy as the greatest genius of the bunch, and he was by a fair margin the most repellent human being. After a youth of drinking, whoring and gambling, he fell madly in love, at 34, with 18-year-old Sofya Behrs. She seemed "a mere child, a lovely thing", but turned out to be just as pig-headed as he was, and foolish with it.
She thought his devotion to the peasantry absurd, while he concluded, from daily observation of her, that there was something wrong with her whole sex -- "Woman is generally stupid." There were furious rows, hysterical fits and suicide attempts. She almost died giving birth to her fifth child, but Tolstoy was offended when she expressed fears about further pregnancies, so breeding continued. Ashamed of his brutal appetites, he maligned the female body ("ugliness, slovenliness, odours") and advocated chastity in his misogynistic Kreutzer Sonata, just as Sofya was giving birth to their 13th. By comparison, George Bernard Shaw's marital arrangements seem almost ideal. A passionless philanderer, frightened of women, he took as his bride the equally frigid Charlotte Payne-Townshend. She was, he said, physically and emotionally like a muffin, but her great attractions were £4,000 a year, a mighty sum at the time, and a determination never to consummate the marriage -- the last thing Shaw wanted. They managed pretty well for 45 years, and her militant chastity went into the making of St Joan, a subject she suggested and researched. Admittedly, the Shaws' solution would not suit all married couples. Joseph Conrad's was more usual, since he married a substitute mother. Jessie, a former typist, was intellectually undeveloped but excellent at domestic chores. She treated Conrad as a son, calling him "Boy", and nursing him through his shattering depressions. When a real son arrived, Conrad naturally felt displaced, and this led to a strange incident when, on a train with Jessie and their child, he suddenly threw their bundle of baby clothes out of the window. Tight-lipped, Jessie remarked that when the clothes were found there would be a search for the baby's corpse.
Meyers ingeniously deciphers this moment of murderous jealousy as the germ of Conrad's novel The Secret Agent, in which the shady and incompetent Verloc kills his "stepson" Stevie who is the sole object of his wife's love. The happiest marriage in the book, and also the unlikeliest, is James Joyce's to Nora Barnacle. She was a raw, uneducated girl from the west of Ireland, with so little understanding of literature that she thought her husband's writing idiotic, calling him affectionately "simple-minded Jim". Seemingly there was some writerly instinct in Joyce that picked her out as his salvation. An icy, inhibited intellectual, who once described adult sex as "brief, brutal, irresistible and devilish", he needed a woman who could arouse his passion and unlock his guilty store of obscene fantasies. Nora was surprised by the literary outcome. Reading Molly Bloom's reveries at the end of Ulysses she commented, "I guess the man's a genius, but what a dirty mind he has, hasn't he?" Joyce seems to have been an extreme case of what Freud identified as the commonest sexual malady among modern males -- the inability to feel intellectual respect and sexual passion for the same woman. With D. H. Lawrence, though, it was just the opposite. Frieda von Richthofen attracted him because, in addition to her aristocratic lineage, she was aflame with intellectual vivacity and emancipated modern theories that excited his shrinking, puritanical nature. As Meyers shows, her ideas and character appear everywhere in his work. Because they were such opposites, violent hatred complicated their love. They would go at each other hammer-and-tongs in public, pulling hair, punching, screaming abuse, to the embarrassment of their friends. Meyers thinks there was an element of slapstick and self-parody in these set-tos, and for Lawrence they were also a kind of therapy. He would have been bored with a submissive mate. 
"I must have opposition, something to fight on, or I shall go under," he admitted. Seemingly cruel and outrageous behaviour on Frieda's part, of which there was plenty, may have stemmed from a subliminal realisation of this need. Of course, with Frieda there is also the chance that it was just cruel and outrageous. Woolf and Mansfield were both invalids, as well as geniuses, and needed faithful nursing. Only Woolf got it. Her husband, Leonard, had been a colonial administrator in Ceylon ("ruled India, hung black men, shot tigers", as Virginia airily put it) and Meyers thinks the imperial ethic of duty and self-sacrifice helped him cope with his wife's descents into madness. Their attempts at sexual relations had been "a terrible failure", and were soon abandoned. But he supported her gallantly to the end. Not so Mansfield's husband, John Middleton Murry, who was of a lower class than Leonard Woolf, and appears weak and insecure by comparison. His wife's tuberculosis frightened him, and he stayed in London while she went south, vainly seeking a cure. Meyers judges him harshly, but Mansfield's poor-little-rich-girl bohemianism must have been hard for a well-brought-up, penny-pinching boy to handle, and Murry was probably jealous of her talent, as Leonard was not of Woolf's. The two Americans also make a contrasting pair, Hemingway brutal and exploitative, Fitzgerald feeble, but faithful to his maniacally egotistic wife Zelda. Hemingway's was the simpler case. He tried to force women into the role of passive, devoted creatures, as men had done since the stone age. The Fitzgeralds, by comparison, were disastrously modern -- drunk on fame and money, flaunting their style and beauty as if conforming to some tabloid image of how celebrities should behave, and spiralling into alcoholism and madness. From a literary angle, though, they triumphed. 
Zelda's tragedy gave Fitzgerald the inspiration for his last great novel, Tender Is the Night, whereas Hemingway's aggressive maleness wrecked his four marriages and his art. Meyers's analyses are always, as here, beautifully clear-cut, but they never lose sight of a truth that H G Wells voiced about the Lawrences' marriage: "The mysteries of human relationships are impenetrably obscure." Available at the Books First price of £8.49 plus 99p p&p on 0870 165 8585 and [74]www.timesonline.co.uk/booksfirstbuy From checker at panix.com Sat Jul 2 15:31:15 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:31:15 -0400 (EDT) Subject: [Paleopsych] Bulletin of the Atomic Scientists: The Pentagon's psychic friends network Message-ID: The Pentagon's psychic friends network http://www.thebulletin.org/print.php?art_ofn=mj05shermer Bulletin of the Atomic Scientists The Men Who Stare at Goats By Jon Ronson Picador, 2004 278 pages; $24 By Michael Shermer May/June 2005 pp. 60-61 (vol. 61, no. 03) Allison was an attractive Oregonian brunette in a new ageish way, before the new age bloomed in the 1980s. She wore all-natural fibers, flowers in her hair, and nothing on her feet. But what most intrigued me in our year of distance dating were Allison's spiritual gifts. I knew she could see through me metaphorically, but Allison also saw things that she said were not allegorical: body auras, energy chakras, spiritual entities, and light beings. One night she closed the door and turned off the lights in my bathroom and told me to stare into the mirror until my aura appeared. During a drive one evening she pointed out spiritual beings dotting the landscape. I tried to see the world as Allison did, but I couldn't. I was a skeptic, and she was a psychic. This was the age of paranormal proliferation.
While a graduate student in experimental psychology, I saw on television the Israeli psychic Uri Geller bend cutlery and reproduce drawings using, so he said, psychic powers alone. Since a number of experimental psychologists had tested Geller and declared him genuine, I began to think that there might be something to it, even if I couldn't personally get with the paranormal program. But then one night I saw the magician James "The Amazing" Randi on Johnny Carson's Tonight Show, replicating with magic everything Geller did. Randi bent spoons, duplicated drawings, levitated tables, and even performed a psychic surgery. When asked about Geller's ability to pass the tests of professional scientists, Randi explained that scientists are not trained to detect trickery and intentional deception, the very art of magic. Randi's right. I vividly recall a seminar that Allison and I attended in which a psychic healer shoved a 10-inch sail needle through his arm with no apparent pain and only a drop of blood. Years later, and to my chagrin, Randi performed the same feat with the simplest of magic. Randi confirmed my skeptical intuitions about all this paranormal piffle, but I always assumed that it was the province of the cultural fringes. Then, in 1995, the story broke that for the previous 25 years the U.S. Army had invested $20 million in a highly secret psychic spy program called Star Gate (also Grill Flame and Scanate), a Cold War project intended to close the "psi gap" (the psychic equivalent of the missile gap) between the United States and Soviet Union. The Soviets were training psychic spies, so we would too. The Men Who Stare at Goats, by British investigative journalist Jon Ronson, is the story of this program, how it started, the bizarre twists and turns it took, and how its legacy carries on today. (Ronson's previous book, Them: Adventures with Extremists, explored the paranoid world of cult-mongers and conspiracy theorists.) 
In a highly readable narrative style, Ronson takes readers on a Looking Glass-like tour of what U.S. Psychological Operations (PsyOps) forces were researching: invisibility, levitation, telekinesis, walking through walls, and even killing goats just by staring at them (the ultimate goal was killing enemy soldiers telepathically). In one project, psychic spies attempted to use "remote viewing" to identify the location of missile silos, submarines, POWs, and MIAs from a small room in a run-down Maryland building. If these skills could be honed and combined, perhaps military officials could zap remotely viewed enemy missiles in their silos, or so the thinking went. Initially, the Star Gate story received broad media attention (including a spot on ABC's Nightline) and made a few of the psychic spies, such as Ed Dames and Joe McMoneagle, minor celebrities. As regular guests on Art Bell's pro-paranormal radio talk show, the former spies spun tales that, had they not been documented elsewhere, would have seemed like the ramblings of paranoid cultists. (There is even a connection between Dames, Bell, and the Heaven's Gate cult mass suicide in 1997, in which 39 UFO devotees took a permanent "trip" to the mother ship they believed was trailing the Hale-Bopp comet.) But Ronson has brought new depth to the account by carefully tracking down leads, revealing connections, and uncovering previously undisclosed stories. For example, Ronson convincingly connects some of the bizarre torture techniques used on prisoners at Cuba's Guantanamo Bay and at Iraq's Abu Ghraib prison with similar techniques employed during the FBI siege of the Branch Davidians in Waco, Texas. FBI agents blasted the Branch Davidians all night with such obnoxious sounds as screaming rabbits, crying seagulls, dentist drills, and Nancy Sinatra's "These Boots Are Made for Walking." The U.S. 
military employed the same technique on Iraqi prisoners of war, instead using the theme song from the PBS kids series Barney and Friends--a tune many parents concur does become torturous with repetition. One of Ronson's sources, none other than Geller (of bent-spoon fame), led him to Maj. Gen. Albert Stubblebine III, who directed the psychic spy network from his office in Arlington, Virginia. Stubblebine thought that with enough practice he could learn to walk through walls, a belief encouraged by Lt. Col. Jim Channon, a Vietnam vet whose post-war experiences at such new age meccas as the Esalen Institute in Big Sur, California, led him to found the "first earth battalion" of "warrior monks" and "Jedi knights." These warriors, according to Channon, would transform the nature of war by entering hostile lands with "sparkly eyes," marching to the mantra of "Om," and presenting the enemy with "automatic hugs." Disillusioned by the ugly carnage of modern war, Channon envisioned a battalion armory of machines that would produce "discordant sounds" (Nancy and Barney?) and "psycho-electric" guns that would shoot "positive energy" at enemy soldiers. Although Ronson expresses skepticism throughout his narrative, he avoids the ontological question of whether any of these claims have any basis in reality. That is, can anyone levitate, turn invisible, walk through walls, or remotely view a hidden object? Inquiring minds (scientists) want to know. The answer is an unequivocal no. Under controlled conditions, remote viewers have never succeeded in finding a hidden target with greater accuracy than random guessing. The occasional successes you hear about are due either to chance or to suspect experiment conditions, like when the person who subjectively assesses whether the remote viewer's narrative description seems to match the target already knows the target location and its characteristics. 
When both the experimenter and the remote viewer are blinded to the target, all psychic powers vanish. Herein lies an important lesson that I have learned in many years of paranormal investigations and that Ronson gleaned in researching his illuminating book: What people remember rarely corresponds to what actually happened. Case in point: A man named Guy Savelli told Ronson that he had seen soldiers kill goats by staring at them, and that he himself had also done so. But as the story unfolds we discover that Savelli is recalling, years later, what he remembers about a particular "experiment" with 30 numbered goats. Savelli randomly chose goat number 16 and gave it his best death stare. But he couldn't concentrate that day, so he quit the experiment, only to be told later that goat number 17 had died. End of story. No autopsy or explanation of the cause of death. No information about how much time had elapsed; the conditions, like temperature, of the room into which the 30 goats had been placed; how long they had been there, and so forth. Since Ronson was skeptical, Savelli triumphantly produced a videotape of another experiment where someone else supposedly stopped the heart of a goat. But the tape showed only a goat whose heart rate dropped from 65 to 55 beats per minute. That was the extent of the empirical evidence of goat killing, and as someone who has spent decades in the same fruitless pursuit of phantom goats, I conclude that the evidence for the paranormal in general doesn't get much better than this. They shoot horses, don't they? Michael Shermer is the publisher of Skeptic magazine (www.skeptic.com), a columnist for Scientific American, and the author of several books, including Why People Believe Weird Things (1997) and Science Friction: Where the Known Meets the Unknown (2005). 
From checker at panix.com Sat Jul 2 15:31:27 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:31:27 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'The Genius Factory': Test-Tube Superbabies Message-ID: 'The Genius Factory': Test-Tube Superbabies New York Times Book Review, 5.7.3 http://www.nytimes.com/2005/07/03/books/review/03MORRICE.html THE GENIUS FACTORY The Curious History of the Nobel Prize Sperm Bank. By David Plotz. 262 pp. Random House. $24.95. By POLLY MORRICE ''All parents expect too much of their children,'' David Plotz writes in ''The Genius Factory,'' his beguiling account of one man's struggle to ensure that everyone's children -- at least white ones -- would come up to the mark. In our era of rampant parental ambition, of ''aggro soccer dads and home schooling enthusiasts plotting their children's future one spelling bee at a time,'' the cockeyed vision of Robert K. Graham, a California millionaire who sought to create cadres of baby geniuses, seems less bizarre than it probably did in 1980, when Graham's Repository for Germinal Choice, better known as the Nobel Prize sperm bank, opened its doors. Plotz, only 10 at the time, recalls his father's appalled reaction to the notion of using brainiac sperm to spawn wunderkinder: He tried to explain it was ''the sort of thing Hitler would have tried.'' Has Graham's project lost its sinister edge? This is one of two inquiries that Plotz, the deputy editor at Slate, explores in his first book. The reader may conclude Hitler would have been more efficient than Graham. Although Graham's business talents allowed him to parlay his invention of plastic eyeglass lenses into a great fortune, he fumbled the first stage of his grand scheme -- cajoling Nobel winners in science to provide their superior seed to improve America's gene pool. The problem was his showpiece donor: William Shockley, a pioneer of the transistor who shared the 1956 Nobel in physics. 
Shockley's sperm, ''a superb asset,'' in Graham's view, was the first contribution frozen, color-coded and offered to infertile couples eager to conceive. In this case Graham's natural marketing flair was done in by his knee-jerk adoration of brilliance. For years Shockley had preached that whites were genetically superior to blacks, and he was widely despised. Reporters who might have seen the genius sperm bank as ''well meaning and perhaps even visionary'' perceived it as inseparable from Shockley's racism. It was reviled as a horror and lampooned as a joke, and Nobel donors shunned it. So the Nobel Prize sperm bank produced no Nobel offspring (even Shockley quit donating sperm, fearing his was too aged to beget healthy children). Yet Graham kept the bank in business nearly two decades, with slightly lowered standards for donors. His staff wooed successful scientists and businessmen who were athletic, healthy and tall (Graham discovered American parents were wary of little eggheads). He lured customers by letting them select donors from an irresistible collection of what Plotz calls ''prime cuts of American man.'' By the time the bank closed in 1999, its customers had produced 215 babies, a respectable addition to the national ''germ plasm,'' as Graham might have said. Those children populate the second part of Plotz's story. In a 2001 article in Slate, Plotz sought information from anyone connected with the repository. He soon found himself cast as the ''Semen Detective,'' trying to hook up sperm-bank children and their mothers with the anonymous progenitors. This would be difficult territory for any writer, and Plotz has to reassure himself that none of his confidants wants him to ''go all Oprah.'' No wonder. We meet, for instance, a young man who desperately hopes his biological dad will be a better father than the one who raised him. 
Plotz's kindness shines through, but some readers may wonder if the book's halves -- explorations of the nature of parenthood and the morality of the Nobel sperm bank -- are coherent. But in the end, the themes mesh. Plotz's meetings with employees, consumers and offspring of the repository, sympathetic people on the whole, may have led him to his understated conclusion that the enterprise wasn't so terrible. For one thing, Graham's inspired strategy of providing consumers a choice of the most desirable men possible freed women from the tyranny of early fertility doctors. And it has become standard industry practice; as Plotz says, ''All sperm banks have become eugenic sperm banks.'' Indeed, reproductive technologies all have eugenic possibilities now, especially preimplantation genetic diagnosis, a means of screening embryos that may one day let parents select the traits they wish for their children. Plotz labels this petri dish micromanagement an instance of ''private eugenics.'' But, he argues, even parents who ''will be lining up for P.G.D. and hoping for a prodigy'' have no use for traditional eugenics, which, in its brutal, negative form, culminated in the Nazis' ''mercy killings'' of those they judged unfit. ''Negative eugenics,'' Plotz says, ''was state-sponsored and brutal. But 'positive' eugenics took a milder approach.'' Graham's version ''sought to increase the number of outstanding people,'' in Plotz's phrase. Is personal eugenics -- producing a superkid for yourself instead of for the master race -- problematic? Plotz suggests the influence of genes is dicey enough and the role of nurture strong enough that we are delusional if we think we can make our children ''what we want them to be, rather than what they are.'' This conclusion, however comforting for parents of teenagers, won't quash everyone's objections. It doesn't address the recent swing toward nature in the old nature vs. nurture debate. 
Nor does it provide an answer for those who fear that prenatal screening may lead scientists to limit future research on genetic disorders. But Plotz's take on the role of genes now -- in our imaginations and in fact, so far as we can determine that -- is humane and funny, which are fine traits for any argument, or any book. Polly Morrice is writing a book about autism. From checker at panix.com Sat Jul 2 15:31:33 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:31:33 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Three Billion New Capitalists': Consider the Outsource Message-ID: 'Three Billion New Capitalists': Consider the Outsource New York Times Book Review, 5.7.3 http://www.nytimes.com/2005/07/03/books/review/03BLODGET.html [First chapter appended.] THREE BILLION NEW CAPITALISTS The Great Shift of Wealth and Power to the East. By Clyde Prestowitz. 321 pp. Basic Books. $26.95. By HENRY BLODGET IF you've managed to ignore the alarm bells on the outlook for American economic leadership -- and you enjoy dreaming -- don't read Clyde Prestowitz's ''Three Billion New Capitalists: The Great Shift of Wealth and Power to the East.'' It argues that the United States faces such serious fiscal and competitive challenges that we may be headed not only for a declining standard of living but for a 1930's-style depression with a capital D. Here's the story. In the golden age, 1950-73, we had it all -- low-cost manufacturing, rising wages, technological dominance, a highly educated and motivated work force, a trade surplus. Until 1971, our reserve currency was backed by gold, forcing us to be responsible. We had control over our economic destiny. Since then, bit by bit, we've lost much of our strength and are in danger of losing the rest. Our first problem is the surge in competitiveness on the part of the rest of the world, especially China and India, a trend Thomas L. 
Friedman analyzes in detail in [3]''The World Is Flat.'' Even if the playing field were level -- which it isn't -- we would not be able to compete with the combination of low-cost labor, talent and fire in the belly of these two behemoths. Our second problem is that we still think we're living in the golden age. In fact, we suffer from a misguided sense of superiority, profligate spending habits, a weak education system, mammoth debts, a ballooning trade deficit and a religious devotion to free-trade theories developed before the Industrial Revolution. Each of these issues could consume a book, but Prestowitz, president of the Economic Strategy Institute and a trade negotiator in the Reagan administration, packs them into one. The heart of the question, as he sees it, is that we are not defending the jewel in our economic crown -- our technology and manufacturing capabilities -- but are instead waxing poetic about the virtues of free trade while more practical countries walk off with our loot. This, he contends, will lead to the gutting of our economy, with well-paid skilled jobs replaced by low-paid menial ones, and an America in hock to the world's next economic leaders. Globalization, of course, is nothing new. The ''hollowing out'' debate hinges on whether the United States can replace the jobs it loses with equal or better ones. Capitalism is fueled by Schumpeter's creative destruction -- new forever displacing old -- and this country has thrived through transitions from agriculture to manufacturing to automation to outsourcing to services. Free-trade advocates argue that globalization is just the latest phase of a continuing evolution. Trade hawks like Prestowitz argue that now is different because of the sheer size of India and China and our inadequate response to the new situation. 
Globalization has always been a touchy subject (after all, Americans lose jobs when companies move production and services overseas) -- so touchy that most popular discussion of it is inflammatory or inane or both. Last year, John Kerry branded corporations and executives who send jobs offshore ''Benedict Arnold companies and C.E.O.'s,'' and a White House adviser, N. Gregory Mankiw, provoked many a storm by suggesting that offshoring was actually beneficial because, among other things, it lowers prices and makes labor available for new opportunities. Mankiw may have been impolitic, but Kerry was just pandering. If the choice is go offshore or go out of business, a chief executive doesn't have a choice. Prestowitz acknowledges that many companies can't survive today without offshoring, but argues that we often abandon industries we could continue to dominate and so lose the ability to lead the next wave of innovation. He lays the blame on government, not the private sector. ''Whether it recognizes the fact or not,'' he declares, ''the United States has a de facto economic strategy, and right now it is to send the country's most important industries overseas.'' He observes, moreover, that the benefits of offshoring go beyond cost: ''You do save money,'' a senior manager at the semiconductor equipment maker KLA-Tencor says about sending work to India. ''But pretty soon, you realize the work is getting done faster and better, and you start sending more and more of it. You also start sending more advanced work and then have to figure out what, if anything, you really don't want to send.'' The work is getting done faster and better, Prestowitz argues, because Indians are not only hungrier than we are, but better educated. China, India, Japan and Europe all churn out more science and engineering degrees than we do. Worse -- and downright embarrassing -- is the state of American education. 
Globally, our 12th-graders rank only in the 10th percentile in math (that's the 10th percentile, not the top 10 percent). Our students also rank first in their assessment of their own performance: we're not only poorly prepared, we have delusions of grandeur. One common argument against the hollowing-out theory is that we can afford to lose jobs in low-tech manufacturing because we retain our high-tech design and manufacturing capabilities. Prestowitz counters that China's and India's incentives and resources are so compelling that the high-tech work is leaving, too. Another argument is that a revaluation of the yuan will curb imports and stimulate exports, thus repairing the trade deficit. In fact, Prestowitz asserts, our manufacturing capacity has been so gutted that we can't export our way out, even if the dollar's value drops to zero. The only path is to cut spending. But Prestowitz risks sounding like Chicken Little when he pronounces the globalization of today more than just another ''gale of creative destruction'' to which our economy will eventually adapt. Manufacturing has long been declining as a percentage of the United States economy, but the jobs lost have been more than offset by growth in services (in health care, financial services, law, retailing, and so on). Prestowitz points out that services are now being offshored, too, but not (yet) at a rate threatening our main growth industries. The McKinsey Global Institute, for example, reports that while 24 million Americans switch jobs each year, only 3 million jobs are estimated to go offshore by 2015. The critical question, still to be satisfactorily answered, is whether offshoring produces net economic gain or loss. Prestowitz deconstructs an oft-cited McKinsey study concluding that each $1 of spending sent offshore results in an overall gain in the gross domestic product of $1.12 to $1.14. 
He points out the study relies on data suggesting that 69 percent of displaced workers found jobs at an average of 97 percent of their former pay. This leaves 31 percent who didn't find new jobs. Not only that, ''if employers took McKinsey's advice to increase their offshoring,'' he says, the gain would quickly become a loss. In America's boom time, government-business cooperation was considered anathema to free-market principles -- ''Politicians shouldn't pick winners and losers!'' In Prestowitz's view, the laissez-faire trade theories of the 19th century have no place in 2005; since he holds that many of our successes have resulted from public-private collaboration, most of his proposals for maintaining American competitiveness boil down to government taking a more active role. Pay teachers more. Help workers move between jobs by offering wage insurance and portable health coverage. Reduce oil consumption by providing incentives for efficient cars (and include S.U.V.'s in mileage regulations). Tax spending, not saving. Help strategic industries with federal loan guarantees and grants. Call ''a new Bretton Woods Conference'' to set steps for reducing the role of the dollar in the world economy and so defuse the trade-deficit bomb. Whatever you think about offshoring, most of these ideas are no-brainers. Henry Blodget, a former securities analyst, writes frequently for Slate and New York magazine. ------------- First chapter of 'Three Billion New Capitalists' http://www.nytimes.com/2005/07/03/books/chapters/0703-1st-prest.html By CLYDE PRESTOWITZ As the headlines listed in the prologue attest, many world figures now fear that a crisis scenario may no longer be a fantasy. American leaders are not concerned. 
None other than former Secretary of State Colin Powell recently told the Atlantic Monthly that, "The United States cannot be touched in this generation by anyone in terms of military power, economic power, the strength of our political system, and our values system." There are good reasons for Powell's confidence. With just 5 percent of the world's population, the United States accounts for over 30 percent of its production and almost 40 percent of its consumption. At $11 trillion, America's gross domestic product is more than twice as big as that of the next largest national economy, and its real per capita income is far above that of any other major country. Its language, American English, is the language of commerce worldwide, and the U.S. dollar is the world's money. Go anywhere in the world and people will tell you how much something costs in dollars and will accept dollars without hesitation. Indeed, Americans have a special privilege in this regard: whereas others must first earn dollars in order to buy oil or wheat or Toyotas on the international market, Americans only need to print more dollars. Of the world's 1,000 largest corporations, 423 are American, and the New York and Nasdaq stock exchanges account for 44 percent of the value of all the stocks in the world. The United States is home to the world's finest universities and the overwhelming majority of its leading research centers, and it spends more on research and development than the next five countries combined. It is, quite simply, the richest, most powerful nation the world has ever seen. Americans long ago adopted the view that helping the rest of the world get rich is good for America. And thus, for the past half century, the United States has-through the process of globalization-orchestrated the growing integration of national economies to create an international exchange of goods, services, money, technology, and people. The results have been as intended and expected. 
This globalization has largely been directed by America, but it has enhanced American wealth and power by enabling others, particularly our allies, to flourish. It was this process, not military threats, that won the Cold War by lifting billions in the free world out of poverty and creating centers of wealth and power around the planet. In Asia, Japan became the world's second largest economy; other countries like Singapore and South Korea flourished so greatly that they became known as "tigers." Across the Atlantic, the European Union grew from six to twenty-five countries and introduced the euro, the first common European currency since Roman times. In Latin America, Mexico has attracted huge foreign investment by becoming a virtual extension of the American economy, and Brazil is flourishing by dint of American and other foreign investment. Although American corporations initially led globalization, they are no longer the only or even the dominant players. Sony, Nokia, Cemex, and Samsung are just a few of the growing numbers of non-American companies that have become global household names. Of course, American influence has not disappeared. American music, clothing styles, sports stars, and movies are not the only entries, but they set the pace, as have Silicon Valley entrepreneurs and the Nobel Prize winners of great American universities like MIT, Harvard, Stanford, and Caltech. Some people and countries have been uncomfortable with the American flavor of this system and have criticized globalization as a euphemism for Americanization. Yet they have found it hard to resist-one of McDonald's most successful restaurants being on the Champs-Élysées in Paris, for example. In the end everybody seems to want to join, and in fact almost everybody has joined. "Globalization" was an odd term to use during the Cold War because half the world was socialist or communist and not playing. 
A citizen of the communist bloc who dared to even suggest playing risked being purged (or worse) as a hated "capitalist roader." Over the past two decades, however, China, India, and the former Soviet Union all decided to leave their respective socialist workers' paradises and drive with their combined 3 billion citizens onto the once despised capitalist road. Although these people are mostly poor, the number having an advanced education and sophisticated skills is larger than the populations of many first world countries. They are arriving on the scene in the context of revolutionary changes. A series of global treaties, concluded largely at American behest, has dramatically lowered trade and investment barriers, making the old rutted capitalist road a lot smoother. With contract manufacturers that can produce anywhere in the world and express delivery companies like FedEx and UPS that can deliver anywhere in the world in thirty-six hours, the road has become a highway. Finally, the global deployment of the Internet negated time and distance for transactions that can be done in bits instead of atoms. Now the highway is a high-speed capitalist raceway, and those 3 billion new people driving on it are, effectively, in your office and living room, and you are in theirs. All of this has generated a whole new wave and model of globalization that is turning the world upside down. The global economic system was designed during the Cold War to attract these newcomers to capitalism, but no one actually anticipated that they would join or what their absorption into it would mean. Although this new wave of globalization has many potential advantages for everyone, it also poses serious challenges. It comes at a time when a fundamental flaw in the international economic structure has combined with American self-indulgence and Asian mercantilism to stress the system and make it vulnerable. 
The irony here is that the winners of the Cold War were less prepared for victory than the losers were for defeat. Thus the impact of the new wave, if not handled carefully, could bring the whole system crashing down.

They Can't Move the Snow to India

It was in the winter of 2003 that my oldest son, Chummy, gave me my first glimpse of the powerful forces being unleashed by the new capitalists and how they might interact with the old system and structures. We were skiing on the north side of Lake Tahoe in California, where he lives. On the lift he asked if I would consider coinvesting with him in a local snow-removal company. "What do you mean by snow removal?" I asked, somewhat surprised because my son is a high-level software developer. "Well," he explained, "the company has contracts to plow the parking lots and access roads of the hotels and vacation condominiums around here whenever it snows, and that happens pretty frequently between November and May." "But what on earth are you doing," I exclaimed, "going into something as mundane as snow plowing?" "Dad," he said, "they can't move the snow to India." It took a minute for that to sink in. It had never occurred to me that my son had anything to fear from India or anywhere else in terms of his career path. It was I, after all, who had advised him to go into computer science, secure in the knowledge that it would put him in a position to write his own ticket. When I asked if his job was in any danger, he thought it unlikely but noted that "outsourcing" is the new management buzzword. "You can never be sure," he said, "that some MBA hotshot with little knowledge of the technology but a big need to impress top management with his or her sophistication won't decide to move the whole operation offshore to India or elsewhere." 
My son further explained that all the big consulting and service firms like Bearing Point, IBM, Deloitte, and others were making daily pitches to top management on how much they could save by outsourcing to India. After asking about the snow removal company's financial status and agreeing to put in a few dollars, I decided to add India (where I hadn't been in twenty years) to the countries in Asia I was scheduled to visit over the next four weeks. At my first stop in Tokyo, discussion centered almost exclusively on China. The tone of the talk was somewhat schizophrenic. Several years ago the Japanese had feared being "hollowed out" as China took over production of steel, machine tools, and electronic components, but now they were talking of China as an opportunity. They even spoke of China possibly replacing America as the world's growth engine and of Japan orienting itself more toward China and less toward America. They were proud of their corporate and national strategy for maintaining a strong manufacturing base that allowed them-unlike the United States, which they said had little to sell-to capitalize on the China boom. Yet in the hon-ne, or real truth, of quiet conversation after a few drinks, Japanese corporate and government leaders alike wondered how Japan would be able to compete with China in the future. In Beijing and Shanghai I was struck again, as I have always been during my visits over the past twenty years, by the rapidity of China's continuing modernization. Stay away from China for six months and you no longer recognize the place when you return. As I took the twelve-minute ride from the airport to downtown on Shanghai's new maglev bullet train, I couldn't help thinking how nice it would be to have something like this in America. That thought recurred over the next few days as I made my rounds of factories, government offices, consulting firms, and think tanks. 
By now everyone knows that China is the world's location of choice for low-cost commodity manufacturing. But what I kept hearing and seeing was that it is also rapidly becoming the location of choice for high-tech manufacturing and even research and development. This impression was greatly strengthened by my visit with old friends at Motorola in Beijing. In the 1980s, as the U.S. trade deficit began to soar, Motorola was a prime leader in an effort to ensure continued high-tech production in the United States through a coordinated industry-government program to improve U.S. high-tech competitiveness. Now, I was told, Motorola had just moved a big part of its manufacturing and R&D to China. I winged on to Singapore, where I was scheduled to meet with the senior minister and father of his country, Lee Kuan Yew. I knew that Lee, having foreseen that China would displace Singapore as a low-cost manufacturing location, had been urging a new high-tech and service-oriented strategy for the now wealthy and high-cost city-state. How did he view the future? With concern, was the answer. China was moving much faster than even he had anticipated, and India's domination of services was completely unexpected. In India, after a tour of Delhi, Hyderabad, Chennai, and Bangalore, I realized I was seeing a revolution-a different, more exciting, and more challenging future than I had imagined. In the "accent neutralization" classes at call center training schools, I listened to English-speaking Indian young people learn to sound like people in Kansas or Ottawa. Thus, if you're a customer of Dell Computer or United Airlines or some other U.S. company phoning a call center to get tech support or make reservations for a trip, you will think you're talking to someone across town or in another American city; you won't realize that India is at the other end of your line. 
In Hyderabad I met with Ramalinga Raju, the founder of Indian infotech services provider Satyam, and I listened as he explained how in 1972 he had started sending programmers to U.S. clients for limited software writing contracts. Now, at their request, he has taken over complete management of those clients' back offices all over the world. By doing the work at the Satyam campus outside of town, he cuts client costs by 70 percent. In Bangalore I saw 1,800 Indians with Ph.D.s in electrical engineering and computer science designing Intel's latest chips. Again, the cost savings were huge; more importantly, Intel couldn't find the same number of equally qualified people in the United States. In Chennai I visited the new biotech industrial park to be directed by Krishna Ella, a University of Michigan Ph.D. who, after several years at the leading edge of biotechnology in the United States, has come home to India, where costs are 20 percent of those in the U.S. market. By the end of my tour, I understood my son's interest in snow removal. I also understood why the notion of outsourcing was sending shivers down the spines of millions of formerly secure upper-middle-class professionals who were beginning to appreciate how blue-collar workers feel about visiting the unemployment office. I flew home via Frankfurt and Paris. On the Lufthansa flight from Delhi, I read the cover story in Der Spiegel about whether, in response to global competition, Europeans could bring themselves to change from their current thirty-five-hour work week to a forty-hour one. After what I had seen over the past three weeks, the question seemed trivial. Can Europe survive? is more appropriate, I thought. But then I remembered that the maglev trains in Shanghai were built in Europe, that Finland has a trade surplus with China, and in Europe my cell phone would work everywhere, instead of only in certain locations, as it does in the United States. 
At Charles de Gaulle Airport in Paris, I bought a pile of newspapers and magazines for the flight to Washington Dulles. The Guardian of London had a front-page story about how the deficit-ridden British National Health Service was thinking about air-expressing blood samples to India for analysis to save money. Lab results would be returned via e-mail. As I arrived in Washington, tax time was fast approaching. So I booked a quick appointment with my tax accountant at a medium-size local firm. As we chatted about my expenses, donations, and deductions, I happened to mention that I was just back from India. "India!" he exclaimed. "We just did a deal to move our whole data processing operation to Bangalore. Your taxes will actually be calculated there." He explained that the move was saving the firm 80 percent on its processing costs. (I wondered why my bill was not being reduced, but that's another book.) That night I phoned my daughter to let her know I was back and to get caught up on the grandchildren. . . . From anonymous_animus at yahoo.com Sat Jul 2 18:09:45 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Sat, 2 Jul 2005 11:09:45 -0700 (PDT) Subject: [Paleopsych] mythology and the mind In-Reply-To: <200507021800.j62I0dR01579@tick.javien.com> Message-ID: <20050702180945.8406.qmail@web30803.mail.mud.yahoo.com> >>Why is there is an eternal return of certain mythic themes in religion, such as messiah myths, flood myths, creation myths, destruction myths, redemption myths, and end of the world myths? What do these recurring themes tell us about the workings of the human mind and culture?<< --Maybe that the mind operates in some ways like a fractal, and that social patterns of dominance, appeasement and sacrifice are projected onto nature automatically, given the lack of a more accurate model. Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com From ljohnson at solution-consulting.com Sat Jul 2 20:14:45 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Sat, 02 Jul 2005 14:14:45 -0600 Subject: [Paleopsych] Bulletin of the Atomic Scientists: The Pentagon's psychic friends network In-Reply-To: References: Message-ID: <42C6F5B5.5040006@solution-consulting.com> Frank, thanks for passing this on. Surely you will not go to hell, disbelief notwithstanding. As I have come to expect from Shermer, the key argument is an outright lie, namely that there is no statistical support for remote viewing. See Jessica Utts' report, and contrast with Hyman's report which he decided upon before seeing the evidence. (Hyman is another skeptic who cannot be convinced by evidence). Even Hyman admits that the statistical evidence is very strong, but he still undermined the program. http://en.wikipedia.org/wiki/Remote_viewing http://anson.ucdavis.edu/~utts/air2.html Utts' own abstract says: "Using the standards applied to any other area of science, it is concluded that psychic functioning has been well established. The statistical results of the studies examined are far beyond what is expected by chance. Arguments that these results could be due to methodological flaws in the experiments are soundly refuted. Effects of similar magnitude to those found in government-sponsored research at SRI and SAIC have been replicated at a number of laboratories across the world. Such consistency cannot be readily explained by claims of flaws or fraud." I have the greatest of respect for the scientific method, and I am always disappointed by Shermer and his ilk that distort and lie in order to support their strict materialistic paradigm. It is inadequate to explain the data, so let's not collect any more data. 
"Sit down before fact like a small child, and be prepared to give up every preconceived notion and follow wherever and to whatever abyss nature leads, or you will learn nothing." -- T. H. Huxley

Other than that, I have no strong feelings.

Lynn

Premise Checker wrote:
> The Pentagon's psychic friends network
> http://www.thebulletin.org/print.php?art_ofn=mj05shermer
> Bulletin of the Atomic Scientists
>
> The Men Who Stare at Goats
> By Jon Ronson
> Picador, 2004
> 278 pages; $24
>
> By Michael Shermer
> May/June 2005 pp. 60-61 (vol. 61, no. 03)
>
> Allison was an attractive Oregonian brunette in a new ageish way, before the new age bloomed in the 1980s. She wore all-natural fibers, flowers in her hair, and nothing on her feet. But what most intrigued me in our year of distance dating were Allison's spiritual gifts. I knew she could see through me metaphorically, but Allison also saw things that she said were not allegorical: body auras, energy chakras, spiritual entities, and light beings. One night she closed the door and turned off the lights in my bathroom and told me to stare into the mirror until my aura appeared. During a drive one evening she pointed out spiritual beings dotting the landscape. I tried to see the world as Allison did, but I couldn't. I was a skeptic, and she was a psychic.
>
> This was the age of paranormal proliferation. While a graduate student in experimental psychology, I saw on television the Israeli psychic Uri Geller bend cutlery and reproduce drawings using, so he said, psychic powers alone. Since a number of experimental psychologists had tested Geller and declared him genuine, I began to think that there might be something to it, even if I couldn't personally get with the paranormal program. But then one night I saw the magician James "The Amazing" Randi on Johnny Carson's Tonight Show, replicating with magic everything Geller did.
Randi bent spoons, duplicated drawings, > levitated tables, and even performed a psychic surgery. When asked > about Geller's ability to pass the tests of professional scientists, > Randi explained that scientists are not trained to detect trickery > and > intentional deception, the very art of magic. Randi's right. I > vividly > recall a seminar that Allison and I attended in which a psychic > healer > shoved a 10-inch sail needle through his arm with no apparent pain > and > only a drop of blood. Years later, and to my chagrin, Randi performed > the same feat with the simplest of magic. > > Randi confirmed my skeptical intuitions about all this paranormal > piffle, but I always assumed that it was the province of the cultural > fringes. Then, in 1995, the story broke that for the previous 25 > years > the U.S. Army had invested $20 million in a highly secret psychic spy > program called Star Gate (also Grill Flame and Scanate), a Cold War > project intended to close the "psi gap" (the psychic equivalent of > the > missile gap) between the United States and Soviet Union. The Soviets > were training psychic spies, so we would too. The Men Who Stare at > Goats, by British investigative journalist Jon Ronson, is the > story of > this program, how it started, the bizarre twists and turns it took, > and how its legacy carries on today. (Ronson's previous book, Them: > Adventures with Extremists, explored the paranoid world of > cult-mongers and conspiracy theorists.) > > In a highly readable narrative style, Ronson takes readers on a > Looking Glass-like tour of what U.S. Psychological Operations > (PsyOps) > forces were researching: invisibility, levitation, telekinesis, > walking through walls, and even killing goats just by staring at them > (the ultimate goal was killing enemy soldiers telepathically). 
In one > project, psychic spies attempted to use "remote viewing" to identify > the location of missile silos, submarines, POWs, and MIAs from a > small > room in a run-down Maryland building. If these skills could be honed > and combined, perhaps military officials could zap remotely viewed > enemy missiles in their silos, or so the thinking went. > > Initially, the Star Gate story received broad media attention > (including a spot on ABC's Nightline) and made a few of the psychic > spies, such as Ed Dames and Joe McMoneagle, minor celebrities. As > regular guests on Art Bell's pro-paranormal radio talk show, the > former spies spun tales that, had they not been documented elsewhere, > would have seemed like the ramblings of paranoid cultists. (There is > even a connection between Dames, Bell, and the Heaven's Gate cult > mass > suicide in 1997, in which 39 UFO devotees took a permanent "trip" to > the mother ship they believed was trailing the Hale-Bopp comet.) > > But Ronson has brought new depth to the account by carefully tracking > down leads, revealing connections, and uncovering previously > undisclosed stories. For example, Ronson convincingly connects > some of > the bizarre torture techniques used on prisoners at Cuba's Guantanamo > Bay and at Iraq's Abu Ghraib prison with similar techniques employed > during the FBI siege of the Branch Davidians in Waco, Texas. FBI > agents blasted the Branch Davidians all night with such obnoxious > sounds as screaming rabbits, crying seagulls, dentist drills, and > Nancy Sinatra's "These Boots Are Made for Walking." The U.S. military > employed the same technique on Iraqi prisoners of war, instead using > the theme song from the PBS kids series Barney and Friends--a tune > many parents concur does become torturous with repetition. > > One of Ronson's sources, none other than Geller (of bent-spoon fame), > led him to Maj. Gen. 
Albert Stubblebine III, who directed the psychic > spy network from his office in Arlington, Virginia. Stubblebine > thought that with enough practice he could learn to walk through > walls, a belief encouraged by Lt. Col. Jim Channon, a Vietnam vet > whose post-war experiences at such new age meccas as the Esalen > Institute in Big Sur, California, led him to found the "first earth > battalion" of "warrior monks" and "Jedi knights." These warriors, > according to Channon, would transform the nature of war by entering > hostile lands with "sparkly eyes," marching to the mantra of "Om," > and > presenting the enemy with "automatic hugs." Disillusioned by the ugly > carnage of modern war, Channon envisioned a battalion armory of > machines that would produce "discordant sounds" (Nancy and Barney?) > and "psycho-electric" guns that would shoot "positive energy" at > enemy > soldiers. > > Although Ronson expresses skepticism throughout his narrative, he > avoids the ontological question of whether any of these claims have > any basis in reality. That is, can anyone levitate, turn invisible, > walk through walls, or remotely view a hidden object? Inquiring minds > (scientists) want to know. The answer is an unequivocal no. Under > controlled conditions, remote viewers have never succeeded in finding > a hidden target with greater accuracy than random guessing. The > occasional successes you hear about are due either to chance or to > suspect experiment conditions, like when the person who subjectively > assesses whether the remote viewer's narrative description seems to > match the target already knows the target location and its > characteristics. When both the experimenter and the remote viewer are > blinded to the target, all psychic powers vanish. 
> > Herein lies an important lesson that I have learned in many years of > paranormal investigations and that Ronson gleaned in researching his > illuminating book: What people remember rarely corresponds to what > actually happened. Case in point: A man named Guy Savelli told Ronson > that he had seen soldiers kill goats by staring at them, and that he > himself had also done so. But as the story unfolds we discover that > Savelli is recalling, years later, what he remembers about a > particular "experiment" with 30 numbered goats. Savelli randomly > chose > goat number 16 and gave it his best death stare. But he couldn't > concentrate that day, so he quit the experiment, only to be told > later > that goat number 17 had died. End of story. No autopsy or explanation > of the cause of death. No information about how much time had > elapsed; > the conditions, like temperature, of the room into which the 30 goats > had been placed; how long they had been there, and so forth. Since > Ronson was skeptical, Savelli triumphantly produced a videotape of > another experiment where someone else supposedly stopped the heart of > a goat. But the tape showed only a goat whose heart rate dropped from > 65 to 55 beats per minute. > > That was the extent of the empirical evidence of goat killing, and as > someone who has spent decades in the same fruitless pursuit of > phantom > goats, I conclude that the evidence for the paranormal in general > doesn't get much better than this. They shoot horses, don't they? > > Michael Shermer is the publisher of Skeptic magazine > (www.skeptic.com), a columnist for Scientific American, and the > author > of several books, including Why People Believe Weird Things (1997) > and > Science Friction: Where the Known Meets the Unknown (2005). 
> _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > > From waluk at earthlink.net Sun Jul 3 00:51:18 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Sat, 02 Jul 2005 17:51:18 -0700 Subject: [Paleopsych] mythology and the mind In-Reply-To: <20050702180945.8406.qmail@web30803.mail.mud.yahoo.com> References: <20050702180945.8406.qmail@web30803.mail.mud.yahoo.com> Message-ID: <42C73686.9060601@earthlink.net> Gerry says: Most literature students are aware that there is one basic concept called "conflict" and this results in 7 plots found in all literature. These are:

1. *Man vs. Nature* as in Tarzan, Robinson Crusoe, The Call Of The Wild and Moby Dick.

2. *Man vs. Man* exemplified by Shane, Othello, and Les Miserables.

3. *Man vs. Environment* found in Dickens - Oliver Twist or David Copperfield, for example.

4. *Man vs. God* such as Hermann Hesse's Siddhartha and the classic Zen And The Art Of Motorcycle Maintenance. For more overt battles with the heavens, see Homer's The Odyssey or the Book of Job in The Bible.

5. *Man vs. Supernatural* as in H.G. Wells' War Of The Worlds and Washington Irving's The Legend Of Sleepy Hollow. Often instead the supernatural in turn act as a catalyst for other conflict - William Peter Blatty's The Exorcist causes Father Mike to question himself, and Edgar Allan Poe's The Tell-Tale Heart uses the spectral beating of a dead man's heart to illustrate a murderer's descent into madness.

6. *Man vs. Self.* Having now conquered all things that man cannot directly control - nature, God, other men, his environment, and the supernatural - he now finds that he must not be in conflict with himself in order to attain happiness. Sometimes these conflicts can be desperately dark and painful - Requiem For A Dream's sordid display of addiction and Hamlet's suicidal thoughts over the anguish of his mother's betrayal and father's death are eerie in that they touch close to home about the suffering of life. Other books which center on this conflict include Salinger's The Catcher In The Rye, Christopher Marlowe's Faust, Virginia Woolf's The Voyage Out, Wharton's Ethan Frome and John Updike's Rabbit, Run.

7. *Man vs. Machine.* For some unseemly reason, once man has conquered the things he cannot control, and has mastered his own self, he is still unsatisfied. His stasis is immediately dropped so that he may invent new things with which he can conflict. One can only wonder if man is doomed to conflict by its very recidivism, or if in some sad masochistic existentialism, the reason we spend so much time analyzing and writing about (and, in this case, creating) our conflicts is that to be is to suffer. As the wise Buddha said, "All is suffering." Still, it seems almost maddening to think that we were not content with the struggles listed before, but have since added machines to our list. The battle with the machines usually arises out of a dystopia that occurs as appearance and reality are blurred. Of course, the first real exploration of this conflict lay in a novel based on the first invention of ourselves - Mary Shelley's Frankenstein. Some other excellent pieces on this include Arthur C. Clarke's 2001: A Space Odyssey, Philip K. Dick's Man, Android, and Machine, and Kokaku Kidoutai's 1995 film Ghost In The Shell.

>>> Why is there an eternal return of certain mythic themes in religion, such as messiah myths, flood myths, creation myths, destruction myths, redemption myths, and end of the world myths? 
What do these recurring themes tell us about the workings of the human mind and culture? -------------- next part -------------- An HTML attachment was scrubbed... URL: From HowlBloom at aol.com Sun Jul 3 13:29:51 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sun, 3 Jul 2005 09:29:51 EDT Subject: [Paleopsych] why do we sleep? Message-ID: <24.7420d603.2ff9424f@aol.com> I found this, stopped what I was doing, filched it, wrote a few sentences on it for you, and here it is. Just something to niggle at your brain while it niggles at mine... The evolutionary reasons for sleep and for the dream of flying are two of the most intriguing unanswered mysteries faced by modern psychology. If the work of sleep researchers like J. Alan Hobson and William Dement gives you the feeling that sleep is at least one area of study we can afford to pause and take a nap about, think again. Don't even bother to think. Just ponder this simple eye-opener: "Dolphins sleep with one-half of the brain at a time, closing one eye while floating or swimming about." Does that jar you awake? It certainly snaps me to attention. Now the question is this. Why DO we sleep? (And why do we dream of flying?) Does anyone have hard research or persuasive anecdote on this...aside from the usual suspects, like we sleep to digest the learning from experiences of the day? Howard Here's the article this comes from: Retrieved July 3, 2005, from the World Wide Web http://www.sciencenews.org/articles/20050702/fob1.asp Science News Online Week of July 2, 2005; Vol. 168, No. 1 Sleepless in SeaWorld: Some newborns and moms forgo slumber Naila Moreira Orca-whale and dolphin mothers and their newborns appear not to sleep for a month after the pups' birth, researchers report. Neither parent nor offspring shows any ill effects from the long waking stint, and the animals don't later compensate with extra sleep. UP WITH THE BABY. 
An orca-whale mother and her newborn pup may forgo sleep for several weeks before adopting a normal pattern. Dolphins also exhibit this behavior. SeaWorld, San Diego No previously studied mammal stays awake for so long, says Jerry Siegel of the University of California, Los Angeles (UCLA), an investigator in the study. In the months following their wakeful period, baby whales and dolphins--and their mothers--ramped up slowly to sleep amounts typical of normal adults, Siegel and his colleagues report. The infants' sleep pattern contrasts with that of other mammals, which need extra sleep during infancy and gradually sleep less as they age. Oleg Lyamin, also of UCLA, started observing an orca mother and her baby just after it was born at SeaWorld, San Diego. Orcas usually snooze for 5 to 8 hours a night, closing both eyes and floating motionlessly. The SeaWorld orca mother and baby, Lyamin found, neither shut their eyes nor remained motionless. Instead, the animals were constantly active, with the infant surfacing for a breath every 30 seconds. The researchers made similar observations of another SeaWorld orca mom and baby. The team also watched dolphins at the Utrish Dolphinarium in Moscow. Dolphins sleep with one-half of the brain at a time, closing one eye while floating or swimming about. The team observed no sleeping behavior in the first month after birth among four dolphin mom-baby pairs. The findings, reported in the June 30 Nature, challenge prevailing notions of the purpose of sleep, some researchers say. "We're under the belief that if you don't get sleep, you can't perform, and you're at risk for developing all sorts of disorders," says Paul Shaw of Washington University in St. Louis. For instance, rats die after being deprived of sleep for just 2 weeks. The UCLA data are "the beginning of a change in the way we view sleep," says Shaw. 
Scientists have commonly hypothesized that people and other animals require sleep for brain development and learning (SN: 6/1/02, p. 341: http://www.sciencenews.org/articles/20020601/fob6.asp). "Here we have a developing [whale or dolphin] youngster with no evidence of sleep," says Irene Tobler of the University of Zurich in Switzerland. "It will revolutionize many people's ways of thinking." Siegel argues that sleep is not required for brain development in these and other young animals and instead plays some role as yet unknown. Alternatively, whales and dolphins may have evolved unusual compensatory mechanisms that permit them to develop without sleep, while other animals still require sleep for brain development, Tobler says. Robert Stickgold of Harvard University suggests that mother and baby whales and dolphins may have evolved an unusual form of sleeping. "A sleepwalker makes it down the stairs, into the kitchen, into the refrigerator quite well while a [brain wave] recording says they're in deep sleep," he notes. Stickgold says that such recordings from the animals could help determine whether the orcas and dolphins are awake. Siegel speculates that mothers and babies of both species need constant activity to survive. The mother pushes the baby to the surface to breathe at regular intervals. Also, the baby must stay warm in cold water while it develops its blubber coat. "The mystery is that they're ... dispensing with sleep behavior when so many sleep researchers have assumed that sleep has a vital function," Siegel says. If you have a comment on this article that you would like considered for publication in Science News, send it to editors at sciencenews.org. Please include your name and location. To subscribe to Science News (print), go to https://www.kable.com/pub/scnw/subServices.asp. To sign up for the free weekly e-LETTER from Science News, go to http://www.sciencenews.org/pages/subscribe_form.asp. References: 2005. 
No sleep in the deep: Unlike other mammals, newborn dolphins and killer whales stay active 24/7 during first months of development. University of California, Los Angeles press release. June 29. Available at http://www.newsroom.ucla.edu/page.asp?RelNum=6274. Lyamin, O. . . . and J. Siegel. 2005. Animal behaviour: Continuous activity in cetaceans after birth. Nature 435(June 30):1177. Abstract available at http://dx.doi.org/10.1038/4351177a. Further Readings: Bower, B. 2002. Snooze power: Midday nap may awaken learning potential. Science News 161(June 1):341. Available at http://www.sciencenews.org/articles/20020601/fob6.asp. Brownlee, C. 2005. Losing sleep: Mutant flies need less shut-eye. Science News 167(April 30):275. Available at http://www.sciencenews.org/articles/20050430/fob2.asp. Hesman, T. 2000. Fly naps inspire dreams of sleep genetics. Science News 157(Feb. 19):117. Available at http://www.sciencenews.org/articles/20000219/fob4.asp. Milius, S. 2004. Sparrows cheat on sleep: Migratory birds are up at night but still stay sharp. Science News 166(July 17):38. Available at http://www.sciencenews.org/articles/20040717/fob7.asp. Sources: Paul Shaw Anatomy and Neurobiology Washington University School of Medicine 660 S. Euclid Avenue Campus Box 8108 St. Louis, MO 63110 Jerry Siegel Psychiatry and Biobehavioral Sciences Center for Sleep Research Neurobiology Research 151A3 VA GLAHS Sepulveda 16111 Plummer Street North Hills, CA 91343 Robert Stickgold Center for Sleep and Cognition Harvard Medical School Beth Israel Deaconess Medical Center E/FD861 330 Brookline Avenue Boston, MA 02115 Irene Tobler Institute of Pharmacology and Toxicology University of Zurich Winterthurerstrasse 190 CH-8057 Zurich Switzerland http://www.sciencenews.org/articles/20050702/fob1.asp From Science News, Vol. 168, No. 1, July 2, 2005, p. 3. Copyright (c) 2005 Science Service. All rights reserved. 
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From HowlBloom at aol.com Sun Jul 3 14:17:53 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sun, 3 Jul 2005 10:17:53 EDT Subject: [Paleopsych] when the body runs riot Message-ID: <111.4d660222.2ff94d91@aol.com> The body has good reasons for giving us inflammation, or so we've been told by evolutionary biology. Inflammation can quarantine micro-attackers and help heal our wounds. But if the swollen redness of inflammation is so useful, why does the body have built-in mechanisms to keep it under control? Those mechanisms are, according to the article below, "epoxyeicosatrienoic acids (EETs)". These acids keep a good thing from happening. They rein in inflammation. 
Could the answer be something that the work of paleopsych member Neil Greenberg taught me a long time ago? In moderate doses, the body's own internally concocted remedies are good. In overdoses they can be poisons. Stress hormones are examples. In swift, sharp jolts, they are pick-me-ups, attention, strength, and energy boosters extremely useful in fast but vicious fights or when it's time to turn tail and skedaddle, escape. But in chronic doses, doses that go on and on and on and on from day to day and week to week, those same stress hormones, those quick-hit tonics, are poisons. Does the body need inflammation inhibitors like epoxyeicosatrienoic acids to make sure that it doesn't get too much of a good thing? Are these inhibitors part of the same sort of checks and balances that Sherringtonian nerves use when they are finely tuned by excitatory signals and a counterbalance of inhibitory signals? Are they like the extensor and tensor muscles, the upper bicep balanced against the muscle that faces it on the underside, the muscle below your arm from your shoulder to your elbow? Snip the bottom muscle and your hand will fly up to your shoulder and stay there, stuck in place by the unchecked enthusiasm of the muscle on top. Cut the top muscle, and your arm will look like an unbending stick. Your elbow and forearm will be permanently locked in position. And are the acids that inhibit inflammation a bit like the part of the brain that does the most to civilize us, the most to make us "human"--the prefrontal cortex? You'd think that the human part of the brain would be there to throw us into hyper-gear and turbocharge, giving us the mental warp engines we need to break the consciousness barrier and rocket into thought. But, no. The prefrontal cortex does the opposite. It's a brake, a drag-chute, an inhibitor. Without the "human" part of the brain, that three-pound lump of pink stuff in our head would apparently do too much, not too little. 
It takes restraint to make us human. When Aristotle said that life is really a balancing act between extremes, he may have gotten it far more right than he ever imagined. Howard Retrieved July 3, 2005, from the World Wide Web http://www.sciencenews.org/articles/20050702/fob2.asp Running Interference: Fresh approach to fighting inflammation Nathan Seppa The more scientists learn about inflammation, the less they like it. Although this bodily process speeds wound healing and corrals microbes, it can also do plenty of harm, as seen in people with arthritis, asthma, and a host of other ailments. Unfortunately, today's anti-inflammatory drugs pose their own problems. They cause stomach distress in many people, and some drugs seem to hike the risk of heart attacks. So, the search for a safe inflammation fighter goes on. Bruce D. Hammock, a biochemist at the University of California, Davis, and his colleagues now report that two experimental drugs shield lab mice from extreme inflammation. The findings appear in an upcoming Proceedings of the National Academy of Sciences. Earlier research had suggested that a troublesome enzyme, called soluble epoxide hydrolase, degrades natural inflammation inhibitors known as epoxyeicosatrienoic acids (EETs). 
From christian.rauh at uconn.edu Sun Jul 3 14:39:34 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Sun, 03 Jul 2005 10:39:34 -0400 Subject: [Paleopsych] when the body runs riot In-Reply-To: <111.4d660222.2ff94d91@aol.com> References: <111.4d660222.2ff94d91@aol.com> Message-ID: <42C7F8A6.90301@uconn.edu> Robust and stable dynamic systems need control mechanisms for up and down regulation to avoid positive feedback loops that spiral out of the safety margins. Christian HowlBloom at aol.com wrote: > The body has good reasons for giving us inflammation, or so we've been > told by evolutionary biology. Inflammation can quarantine > micro-attackers and help heal our wounds. But if the swollen redness of > inflammation is so useful, why does the body have built-in mechanisms to > keep it under control? > > Those mechanisms are, according to the article below, > "epoxyeicosatrienoic acids (EETs)". These acids keep a good thing from > happening. They rein in inflammation. -- ~ I G N O R A N C E ~ The trouble with ignorance is precisely that if a person lacks virtue and knowledge, he's perfectly satisfied with the way he is. If a person isn't aware of a lack, he can not desire the thing which he isn't aware of lacking. Symposium (204a), Plato 
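[Christian's point about up- and down-regulation can be sketched as a toy iteration. The gain and inhibition constants below are invented for illustration, not biological values.]

```python
def simulate(steps, gain=1.2, inhibition=0.0):
    """Iterate a signal with positive feedback (gain > 1) and an
    optional quadratic down-regulation term that caps the growth,
    loosely in the spirit of the inflammation inhibitors discussed
    in this thread."""
    x = 1.0
    history = [x]
    for _ in range(steps):
        x = gain * x - inhibition * x * x
        history.append(x)
    return history

runaway = simulate(30)                     # no inhibitor: grows without bound
regulated = simulate(30, inhibition=0.02)  # settles near (gain - 1) / inhibition
```

[With the down-regulation term the signal levels off at a stable fixed point; without it the same positive feedback spirals out of the safety margins.]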
From shovland at mindspring.com Sun Jul 3 14:41:00 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 3 Jul 2005 07:41:00 -0700 Subject: [Paleopsych] when the body runs riot Message-ID: <01C57FA2.8FB1F080.shovland@mindspring.com> Nicholas Perricone MD has a lot to say about the inflammation-disease connection, and lots of ideas for dealing with it. Steve Hovland www.stevehovland.net -----Original Message----- From: HowlBloom at aol.com [SMTP:HowlBloom at aol.com] Sent: Sunday, July 03, 2005 7:18 AM To: paleopsych at paleopsych.org Subject: [Paleopsych] when the body runs riot << File: ATT00000.txt; charset = UTF-8 >> << File: ATT00001.html; charset = UTF-8 >> << File: ATT00002.txt >> From checker at panix.com Sun Jul 3 15:02:18 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 11:02:18 -0400 (EDT) Subject: [Paleopsych] Science: What Don't We Know? (125th anniversary issue) Message-ID: What Don't We Know? -- Kennedy and Norman 309 (5731): 75 -- Science http://www.sciencemag.org/cgi/content/summary/sci;309/5731/75 et seq. [All articles included. Read carefully. I'd like to know if there will be a different answer to the question, "How much can we boost IQ and scholastic achievement?" Actually, there was very little here touching upon the social sciences or social issues.] 
The ground rules: Scientists should have a good shot at answering the questions over the next 25 years, or they should at least know how to go about answering them. We intended simply to choose 25 of these suggestions and turn them into a survey of the big questions facing science. But when a group of editors and writers sat down to select those big questions, we quickly realized that 25 simply wouldn't convey the grand sweep of cutting-edge research that lies behind the responses we received. So we have ended up with 125 questions, a fitting number for Science's 125th anniversary. First, a note on what this special issue is not: It is not a survey of the big societal challenges that science can help solve, nor is it a forecast of what science might achieve. Think of it instead as a survey of our scientific ignorance, a broad swath of questions that scientists themselves are asking. As Tom Siegfried puts it in his introductory essay, they are "opportunities to be exploited." We selected 25 of the 125 questions to highlight based on several criteria: how fundamental they are, how broad-ranging, and whether their solutions will impact other scientific disciplines. Some have few immediate practical implications--the composition of the universe, for example. Others we chose because the answers will have enormous societal impact--whether an effective HIV vaccine is feasible, or how much the carbon dioxide we are pumping into the atmosphere will warm our planet, for example. Some, such as the nature of dark energy, have come to prominence only recently; others, such as the mechanism behind limb regeneration in amphibians, have intrigued scientists for more than a century. We listed the 25 highlighted questions in no special order, but we did group the 100 additional questions roughly by discipline. Our sister online publications are also devoting special issues to Science's 125th anniversary. 
The Science of Aging Knowledge Environment, SAGE KE (www.sageke.org), is surveying several big questions confronting researchers on aging. The Signal Transduction Knowledge Environment, STKE (www.stke.org), has selected classic Science articles that have had a high impact in the field of cell signaling and is highlighting them in an editorial guide. And Science's Next Wave (www.nextwave.org) is looking at the careers of scientists grappling with some of the questions Science has identified. We are acutely aware that even 125 unknowns encompass only a partial answer to the question that heads this special section: What Don't We Know? So we invite you to participate in a special forum on Science's Web site (www.sciencemag.org/sciext/eletters/125th), in which you can comment on our 125 questions or nominate topics we missed--and we apologize if they are the very questions you are working on. -------------- How Hot Will the Greenhouse World Be? Richard A. Kerr Scientists know that the world has warmed lately, and they believe humankind is behind most of that warming. But how far might we push the planet in coming decades and centuries? That depends on just how sensitively the climate system--air, oceans, ice, land, and life--responds to the greenhouse gases we're pumping into the atmosphere. For a quarter-century, expert opinion was vague about climate sensitivity. Experts allowed that climate might be quite touchy, warming sharply when shoved by one climate driver or another, such as the carbon dioxide from fossil fuel burning, volcanic debris, or dimming of the sun. On the other hand, the same experts conceded that climate might be relatively unresponsive, warming only modestly despite a hard push toward the warm side. The problem with climate sensitivity is that you can't just go out and directly measure it. Sooner or later a climate model must enter the picture.
Every model has its own sensitivity, but each is subject to all the uncertainties inherent in building a hugely simplified facsimile of the real-world climate system. As a result, climate scientists have long quoted the same vague range for sensitivity: A doubling of the greenhouse gas carbon dioxide, which is expected to occur this century, would eventually warm the world between a modest 1.5°C and a whopping 4.5°C. This range--based on just two early climate models--first appeared in 1979 and has been quoted by every major climate assessment since. [Figure 1: A harbinger? Coffins being lined up during the record-breaking 2003 heat wave in Europe.] Researchers are finally beginning to tighten up the range of possible sensitivities, at least at one end. For one, the sensitivities of the available models (5% to 95% confidence range) are now falling within the canonical range of 1.5°C to 4.5°C; some had gone considerably beyond the high end. And the first try at a new approach--running a single model while varying a number of model parameters such as cloud behavior--has produced a sensitivity range of 2.4°C to 5.4°C, with a most probable value of 3.2°C. Models are only models, however. How much better if nature ran the experiment? Enter paleoclimatologists, who sort out how climate drivers such as greenhouse gases have varied naturally in the distant past and how the climate system of the time responded. Nature, of course, has never run the perfect analog for the coming greenhouse warming. And estimating how much carbon dioxide concentrations fell during the depths of the last ice age or how much sunlight debris from the eruption of Mount Pinatubo in the Philippines blocked will always have lingering uncertainties. But paleoclimate estimates of climate sensitivity generally fall in the canonical range, with a best estimate in the region of 3°C.
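What "climate sensitivity" buys you computationally can be sketched with the standard textbook simplification that CO2's radiative forcing grows roughly logarithmically with concentration, so equilibrium warming is approximately the sensitivity times the number of doublings. This is an illustration of the concept, not the article's own calculation:

```python
import math

def equilibrium_warming(co2_ratio, sensitivity_c):
    """Approximate equilibrium warming in deg C for a CO2 concentration
    ratio co2_ratio = C / C0, given a climate sensitivity in deg C per
    doubling. Standard logarithmic-forcing approximation:
    dT = S * log2(C / C0)."""
    return sensitivity_c * math.log2(co2_ratio)

# One doubling (ratio = 2) warms by exactly the sensitivity, so the
# canonical 1.5-4.5 C range and the 3.2 C most-probable value from the
# text map directly onto the projected warming for doubled CO2:
for s in (1.5, 3.2, 4.5):
    print(f"sensitivity {s:.1f} C/doubling -> {equilibrium_warming(2.0, s):.1f} C for doubled CO2")
```

The same function shows why the high end matters: at a ratio of 4 (two doublings), even the modest 1.5°C sensitivity yields 3°C of warming.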
The lower end at least of likely climate sensitivity does seem to be firming up; it's not likely below 1.5°C, say researchers. That would rule out the negligible warmings proposed by some greenhouse contrarians. But climate sensitivity calculations still put a fuzzy boundary on the high end. Studies drawing on the past century's observed climate change plus estimates of natural and anthropogenic climate drivers yield up to 30% probabilities of sensitivities above 4.5°C, ranging as high as 9°C. The latest study that varies model parameters allows sensitivities up to 11°C, with the authors contending that they can't yet say what the chances of such extremes are. Others are pointing to times of extreme warmth in the geologic past that climate models fail to replicate, suggesting that there's a dangerous element to the climate system that the models do not yet contain. Climate researchers have their work cut out for them. They must inject a better understanding of clouds and aerosols--the biggest sources of uncertainty--into their modeling. Ten or 15 years ago, scientists said that would take 10 or 15 years; there's no sign of it happening anytime soon. They must increase the fidelity of models, a realistic goal given the continued acceleration of affordable computing power. And they must retrieve more and better records of past climate changes and their drivers. Meanwhile, unless a rapid shift away from fossil fuel use occurs worldwide, a doubling of carbon dioxide--and more--will be inevitable. _________________________________________________________________ What Can Replace Cheap Oil--and When? Richard A. Kerr and Robert F. Service The road from old to new energy sources can be bumpy, but the transitions have gone pretty smoothly in the past. After millennia of dependence on wood, society added coal and gravity-driven water to the energy mix. Industrialization took off.
Oil arrived, and transportation by land and air soared, with hardly a worry about where the next log or lump of coal was coming from, or what the explosive growth in energy production might be doing to the world. Times have changed. The price of oil has been climbing, and ice is melting around both poles as the mercury in the global thermometer rises. Whether the next big energy transition will be as smooth as past ones will depend in large part on three sets of questions: When will world oil production peak? How sensitive is Earth's climate to the carbon dioxide we are pouring into the atmosphere by burning fossil fuels? And will alternative energy sources be available at reasonable costs? The answers rest on science and technology, but how society responds will be firmly in the realm of politics. There is little disagreement that the world will soon be running short of oil. The debate is over how soon. Global demand for oil has been rising at 1% or 2% each year, and we are now sucking almost 1000 barrels of oil from the ground every second. Pessimists--mostly former oil company geologists--expect oil production to peak very soon. They point to American geologist M. King Hubbert's successful 1956 prediction of the 1970 peak in U.S. production. Using the same method involving records of past production and discoveries, they predict a world oil peak by the end of the decade. Optimists--mostly resource economists--argue that oil production depends more on economics and politics than on how much happens to be in the ground. Technological innovation will intervene, and production will continue to rise, they say. Even so, midcentury is about as far as anyone is willing to push the peak. That's still "soon" considering that the United States, for one, will need to begin replacing oil's 40% contribution to its energy consumption by then. And as concerns about climate change intensify, the transition to nonfossil fuels could become even more urgent (see p. 100).
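Hubbert's method fits cumulative production and discovery records to a logistic curve; its derivative is a symmetric bell-shaped production-rate curve that peaks when roughly half the ultimately recoverable resource has been extracted. A toy sketch of that rate curve, with deliberately made-up parameter values (not real-world estimates):

```python
import math

def hubbert_rate(year, peak_year, ultimate_recovery, width):
    """Production rate under Hubbert's logistic model: a symmetric bell
    curve whose total area equals ultimate_recovery, peaking at
    ultimate_recovery / (4 * width) in peak_year."""
    x = math.exp(-(year - peak_year) / width)
    return ultimate_recovery * x / (width * (1.0 + x) ** 2)

# Hypothetical world curve: 2000 (billion barrels) ultimately
# recoverable, peak in 2010, characteristic width of 15 years.
rates = {y: hubbert_rate(y, 2010, 2000.0, 15.0) for y in (1990, 2010, 2030)}
```

The symmetry is the model's signature claim: production 20 years after the peak equals production 20 years before it, which is how a fit to the rising half of the curve yields a prediction for the decline.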
If oil supplies do peak soon or climate concerns prompt a major shift away from fossil fuels, plenty of alternative energy supplies are waiting in the wings. The sun bathes Earth's surface with 86,000 trillion watts, or terawatts, of energy at all times, about 6600 times the amount used by all humans on the planet each year. Wind, biomass, and nuclear power are also plentiful. And there is no shortage of opportunities for using energy more efficiently. Of course, alternative energy sources have their issues. Nuclear fission supporters have never found a noncontroversial solution for disposing of long-lived radioactive wastes, and concerns over liability and capital costs are scaring utility companies off. Renewable energy sources are diffuse, making it difficult and expensive to corral enough power from them at cheap prices. So far, wind is leading the way with a global installed capacity of more than 40 billion watts, or gigawatts, providing electricity for about 4.5 cents per kilowatt hour. That sounds good, but the scale of renewable energy is still very small when compared to fossil fuel use. In the United States, renewables account for just 6% of overall energy production. And, with global energy demand expected to grow from approximately 13 terawatts a year now to somewhere between 30 and 60 terawatts by the middle of this century, use of renewables will have to expand enormously to displace current sources and have a significant impact on the world's future energy needs. What needs to happen for that to take place? Using energy more efficiently is likely to be the sine qua non of energy planning--not least to buy time for efficiency improvements in alternative energy. The cost of solar electric power modules has already dropped two orders of magnitude over the last 30 years. And most experts figure the price needs to drop 100-fold again before solar energy systems will be widely adopted. 
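The solar-abundance comparison above is easy to sanity-check with the article's own round numbers:

```python
# Figures from the text: ~86,000 TW of sunlight bathing Earth's surface,
# versus ~13 TW of current global human energy demand.
solar_input_tw = 86_000
human_demand_tw = 13

ratio = solar_input_tw / human_demand_tw
print(f"solar input exceeds demand by a factor of about {ratio:,.0f}")
```

The ratio comes out near 6,600, consistent with the article's figure.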
Advances in nanotechnology may help by providing novel semiconductor systems to boost the efficiency of solar energy collectors and perhaps produce chemical fuels directly from sunlight, CO2, and water. But whether these will come in time to avoid an energy crunch depends in part on how high a priority we give energy research and development. And it will require a global political consensus on what the science is telling us. _________________________________________________________________ Will Malthus Continue to Be Wrong? Erik Stokstad In 1798, a 32-year-old curate at a small parish church in Albury, England, published a sobering pamphlet entitled An Essay on the Principle of Population. As a grim rebuttal of the utopian philosophers of his day, Thomas Malthus argued that human populations will always tend to grow and, eventually, they will always be checked--either by foresight, such as birth control, or as a result of famine, war, or disease. Those speculations have inspired many a dire warning from environmentalists. Since Malthus's time, world population has risen sixfold to more than 6 billion. Yet happily, apocalyptic collapses have mostly been prevented by the advent of cheap energy, the rise of science and technology, and the green revolution. Most demographers predict that by 2100, global population will level off at about 10 billion. The urgent question is whether current standards of living can be sustained while improving the plight of those in need. Consumption of resources--not just food but also water, fossil fuels, timber, and other essentials--has grown enormously in the developed world. In addition, humans have compounded the direct threats to those resources in many ways, including by changing climate (see p. 100), polluting land and water, and spreading invasive species. How can humans live sustainably on the planet and do so in a way that manages to preserve some biodiversity?
Tackling that question involves a broad range of research for natural and social scientists. It's abundantly clear, for example, that humans are degrading many ecosystems and hindering their ability to provide clean water and other "goods and services" (Science, 1 April, p. 41). But exactly how bad is the situation? Researchers need better information on the status and trends of wetlands, forests, and other areas. To set priorities, they'd also like a better understanding of what makes ecosystems more resistant or vulnerable and whether stressed ecosystems, such as marine fisheries, have a threshold at which they won't recover. [Figure 1: Out of balance. Sustaining a growing world population is threatened by inefficient consumption of resources--and by poverty.] Agronomists face the task of feeding 4 billion more mouths. Yields may be maxing out in the developed world, but much can still be done in the developing world, particularly sub-Saharan Africa, which desperately needs more nitrogen. Although agricultural biotechnology clearly has potential to boost yields and lessen the environmental impact of farming, it has its own risks, and winning over skeptics has proven difficult. There's no shortage of work for social scientists either. Perverse subsidies that encourage overuse of resources--tax loopholes for luxury Hummers and other inefficient vehicles, for example--remain a chronic problem. A new area of activity is the attempt to place values on ecosystems' services, so that the price of clear-cut lumber, for instance, covers the loss of a forest's ability to provide clean water. Incorporating those "externalities" into pricing is a daunting challenge that demands much more knowledge of ecosystems. In addition, economic decisions often consider only net present value and discount the future value of resources--soil erosion, slash-and-burn agriculture, and the mining of groundwater for cities and farming are prime examples.
All this complicates the process of transforming industries so that they provide jobs, goods, and services while damaging the environment less. Researchers must also grapple with the changing demographics of housing and how it will impact human well-being: In the next 35 to 50 years, the number of people living in cities will double. Much of the growth will likely happen in the developing world in cities that currently have 30,000 to 3 million residents. Coping with that huge urban influx will require everything from energy-efficient ways to make concrete to simple ways to purify drinking water. And in an age of global television and relentless advertising, what will happen to patterns of consumption? The world clearly can't support 10 billion people living like Americans do today. Whether science--both the natural and social sciences--and technology can crank up efficiency and solve the problems we've created is perhaps the most critical question the world faces. Mustering the political will to make hard choices is, however, likely to be an even bigger challenge. _________________________________________________________________ In Praise of Hard Questions Tom Siegfried* Great cases, as U.S. Supreme Court Justice Oliver Wendell Holmes suggested a century ago, may make bad law. But great questions often make very good science. Unsolved mysteries provide science with motivation and direction. Gaps in the road to scientific knowledge are not potholes to be avoided, but opportunities to be exploited. "Fundamental questions are guideposts; they stimulate people," says 2004 Nobel physics laureate David Gross. "One of the most creative qualities a research scientist can have is the ability to ask the right questions." Science's greatest advances occur on the frontiers, at the interface between ignorance and knowledge, where the most profound questions are posed.
There's no better way to assess the current condition of science than listing the questions that science cannot answer. "Science," Gross declares, "is shaped by ignorance." There have been times, though, when some believed that science had paved over all the gaps, ending the age of ignorance. When Science was born, in 1880, James Clerk Maxwell had died just the year before, after successfully explaining light, electricity, magnetism, and heat. Along with gravity, which Newton had mastered 2 centuries earlier, physics was, to myopic eyes, essentially finished. Darwin, meanwhile, had established the guiding principle of biology, and Mendeleyev's periodic table--only a decade old--allowed chemistry to publish its foundations on a poster board. Maxwell himself mentioned that many physicists believed the trend in their field was merely to measure the values of physical constants "to another place of decimals." Nevertheless, great questions raged. Savants of science debated not only the power of natural selection, but also the origin of the solar system, the age and internal structure of Earth, and the prospect of a plurality of worlds populating the cosmos. In fact, at the time of Maxwell's death, his theory of electromagnetic fields was not yet widely accepted or even well known; experts still argued about whether electricity and magnetism propagated their effects via "action at a distance," as gravity (supposedly) did, or by Michael Faraday's "lines of force" (incorporated by Maxwell into his fields). Lurking behind that dispute was the deeper issue of whether gravity could be unified with electromagnetism (Maxwell thought not), a question that remains one of the greatest in science today, in a somewhat more complicated form. Maxwell knew full well that his accomplishments left questions unanswered. His calculations regarding the internal motion of molecules did not agree with measurements of specific heats, for instance. 
"Something essential to the complete state of the physical theory of molecular encounters must have hitherto escaped us," he commented. When Science turned 20--at the 19th century's end--Maxwell's mentor William Thomson (Lord Kelvin) articulated the two grand gaps in knowledge of the day. (He called them "clouds" hanging over physicists' heads.) One was the mystery of specific heats that Maxwell had identified; the other was the failure to detect the ether, a medium seemingly required by Maxwell's electromagnetic waves. Filling those two gaps in knowledge required the 20th century's quantum and relativity revolutions. The ignorance enveloped in Kelvin's clouds was the impetus for science's revitalization. Throughout the last century, pursuing answers to great questions reshaped human understanding of the physical and living world. Debates over the plurality of worlds assumed galactic proportions, specifically addressing whether Earth's home galaxy, the Milky Way, was only one of many such conglomerations of stars. That issue was soon resolved in favor of the Milky Way's nonexclusive status, in much the same manner that Earth itself had been demoted from its central role in the cosmos by Copernicus centuries before. But the existence of galaxies outside our own posed another question, about the apparent motions of those galaxies away from one another. That issue echoed a curious report in Science's first issue about a set of stars forming a triangular pattern, with a double star at the apex and two others forming the base. Precise observations showed the stars to be moving apart, making the triangle bigger but maintaining its form. "It seems probable that all these stars are slowly moving away from one common point, so that many years back they were all very much closer to one another," Science reported, as though the four stars had all begun their journey from the same place. Understanding such motion was a question "of the highest interest." 
A half a century later, Edwin Hubble enlarged that question from one about stellar motion to the origin and history of the universe itself. He showed that galaxies also appeared to be receding from a common starting point, evidence that the universe was expanding. With Hubble's discovery, cosmology's grand questions began to morph from the philosophical to the empirical. And with the discovery of the cosmic microwave background in the 1960s, the big bang theory of the universe's birth assumed the starring role on the cosmological stage--providing cosmologists with one big answer and many new questions. By Science's centennial, a quarter-century ago, many gaps still remained in knowledge of the cosmos; some of them have since been filled, while others linger. At that time debate continued over the existence of planets around faraway stars, a question now settled with the discovery of dozens of planets in the solar system's galactic neighborhood. But now a bigger question looms beyond the scope of planets or even galaxies: the prospect of multiple universes, cousins to the bubble of time and space that humans occupy. And not only may the human universe not be alone (defying the old definition of universe), humans may not be alone in their own space, either. The possible existence of life elsewhere in the cosmos remains as great a gap as any in present-day knowledge. And it is enmeshed with the equally deep mystery of life's origin on Earth. Life, of course, inspires many deep questions, from the prospects for immortality to the prognosis for eliminating disease. Scientists continue to wonder whether they will ever be able to create new life forms from scratch, or at least simulate life's self-assembling capabilities. 
Biologists, physicists, mathematicians, and computer scientists have begun cooperating on a sophisticated "systems biology" aimed at understanding how the countless molecular interactions at the heart of life fit together in the workings of cells, organs, and whole animals. And if successful, the systems approach should help doctors tailor treatments to individual variations in DNA, permitting personalized medicine that deters disease without inflicting side effects. Before Science turns 150, revamped versions of modern medicine may make it possible for humans to live that long, too. As Science and science age, knowledge and ignorance have coevolved, and the nature of the great questions sometimes changes. Old questions about the age and structure of the Earth, for instance, have given way to issues concerning the planet's capacity to support a growing and aging population. Some great questions get bigger over time, encompassing an ever-expanding universe, or become more profound, such as the quest to understand consciousness. On the other hand, many deep questions drive science to smaller scales, more minute than the realm of atoms and molecules, or to a greater depth of detail underlying broad-brush answers to past big questions. In 1880, some scientists remained unconvinced by Maxwell's evidence for atoms. Today, the analogous debate focuses on superstrings as the ultimate bits of matter, on a scale a trillion trillion times smaller. Old arguments over evolution and natural selection have descended to debates on the dynamics of speciation, or how particular behaviors, such as altruistic cooperation, have emerged from the laws of individual competition. Great questions themselves evolve, of course, because their answers spawn new and better questions in turn. The solutions to Kelvin's clouds--relativity and quantum physics--generated many of the mysteries on today's list, from the composition of the cosmos to the prospect for quantum computers. 
Ultimately, great questions like these both define the state of scientific knowledge and drive the engines of scientific discovery. Where ignorance and knowledge converge, where the known confronts the unknown, is where scientific progress is most dramatically made. "Thoroughly conscious ignorance," wrote Maxwell, "is the prelude to every real advance in science." So when science runs out of questions, it would seem, science will come to an end. But there's no real danger of that. The highway from ignorance to knowledge runs both ways: As knowledge accumulates, diminishing the ignorance of the past, new questions arise, expanding the areas of ignorance to explore. Maxwell knew that even an era of precision measurements is not a sign of science's end but preparation for the opening of new frontiers. In every branch of science, Maxwell declared, "the labor of careful measurement has been rewarded by the discovery of new fields of research and by the development of new scientific ideas." If science's progress seems to slow, it's because its questions get increasingly difficult, not because there will be no new questions left to answer. Fortunately, hard questions also can make great science, just as Justice Holmes noted that hard cases, like great cases, made bad law. Bad law resulted, he said, because emotional concerns about celebrated cases exerted pressures that distorted well-established legal principles. And that's why the situation in science is the opposite of that in law. The pressures of the great, hard questions bend and even break well-established principles, which is what makes science forever self-renewing--and which is what demolishes the nonsensical notion that science's job will ever be done. __________________________________________ Tom Siegfried is the author of Strange Matters and The Bit and the Pendulum. _________________________________________________________________ What Is the Universe Made Of? 
Charles Seife Every once in a while, cosmologists are dragged, kicking and screaming, into a universe much more unsettling than they had any reason to expect. In the 1500s and 1600s, Copernicus, Kepler, and Newton showed that Earth is just one of many planets orbiting one of many stars, destroying the comfortable Medieval notion of a closed and tiny cosmos. In the 1920s, Edwin Hubble showed that our universe is constantly expanding and evolving, a finding that eventually shattered the idea that the universe is unchanging and eternal. And in the past few decades, cosmologists have discovered that the ordinary matter that makes up stars and galaxies and people is less than 5% of everything there is. Grappling with this new understanding of the cosmos, scientists face one overriding question: What is the universe made of? This question arises from years of progressively stranger observations. In the 1960s, astronomers discovered that galaxies spun around too fast for the collective pull of the stars' gravity to keep them from flying apart. Something unseen appears to be keeping the stars from flinging themselves away from the center: unilluminated matter that exerts extra gravitational force. This is dark matter. Over the years, scientists have spotted some of this dark matter in space; they have seen ghostly clouds of gas with x-ray telescopes, watched the twinkle of distant stars as invisible clumps of matter pass in front of them, and measured the distortion of space and time caused by invisible mass in galaxies. And thanks to observations of the abundances of elements in primordial gas clouds, physicists have concluded that only 10% of ordinary matter is visible to telescopes. [Figure 1: In the dark. Dark matter holds galaxies together; supernovae measurements point to a mysterious dark energy.] But even multiplying all the visible "ordinary" matter by 10 doesn't come close to accounting for how the universe is structured.
When astronomers look up in the heavens with powerful telescopes, they see a lumpy cosmos. Galaxies don't dot the skies uniformly; they cluster together in thin tendrils and filaments that twine among vast voids. Just as there isn't enough visible matter to keep galaxies spinning at the right speed, there isn't enough ordinary matter to account for this lumpiness. Cosmologists now conclude that the gravitational forces exerted by another form of dark matter, made of an as-yet-undiscovered type of particle, must be sculpting these vast cosmic structures. They estimate that this exotic dark matter makes up about 25% of the stuff in the universe--five times as much as ordinary matter. But even this mysterious entity pales by comparison to another mystery: dark energy. In the late 1990s, scientists examining distant supernovae discovered that the universe is expanding faster and faster, instead of slowing down as the laws of physics would imply. Is there some sort of antigravity force blowing the universe up? All signs point to yes. Independent measurements of a variety of phenomena--cosmic background radiation, element abundances, galaxy clustering, gravitational lensing, gas cloud properties--all converge on a consistent, but bizarre, picture of the cosmos. Ordinary matter and exotic, unknown particles together make up only about 30% of the stuff in the universe; the rest is this mysterious anti-gravity force known as dark energy. This means that figuring out what the universe is made of will require answers to three increasingly difficult sets of questions. What is ordinary dark matter made of, and where does it reside? Astrophysical observations, such as those that measure the bending of light by massive objects in space, are already yielding the answer. What is exotic dark matter? Scientists have some ideas, and with luck, a dark-matter trap buried deep underground or a high-energy atom smasher will discover a new type of particle within the next decade. 
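The rotation-speed argument behind dark matter can be made concrete with Newtonian orbits: the circular speed set by the mass M enclosed within radius r is v = sqrt(GM/r), so if luminous matter were all there is, speeds in a galaxy's outskirts should fall off as 1/sqrt(r). Observed rotation curves stay roughly flat instead, implying extra unseen mass. A sketch with hypothetical numbers (the mass and radii below are illustrative, not measurements):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def circular_velocity(enclosed_mass_kg, radius_m):
    """Newtonian circular orbital speed: v = sqrt(G * M / r)."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

# Hypothetical luminous mass of a galaxy (~1e11 solar masses), assumed
# to lie well inside both radii probed below.
m_luminous = 2e41  # kg

v_inner = circular_velocity(m_luminous, 5e20)  # radius ~16 kpc
v_outer = circular_velocity(m_luminous, 1e21)  # radius ~32 kpc

# Keplerian prediction: doubling the radius cuts the speed by sqrt(2).
# Real galaxies show v_outer ~ v_inner, which is the dark-matter puzzle.
```

With these toy numbers the inner speed comes out around 160 km/s, a plausible galactic scale, while the predicted outer speed is sqrt(2) lower than what flat observed curves would show.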
And finally, what is dark energy? This question, which wouldn't even have been asked a decade ago, seems to transcend known physics more than any other phenomenon yet observed. Ever-better measurements of supernovae and cosmic background radiation as well as planned observations of gravitational lensing will yield information about dark energy's "equation of state"--essentially a measure of how squishy the substance is. But at the moment, the nature of dark energy is arguably the murkiest question in physics--and the one that, when answered, may shed the most light. _________________________________________________________________ So Much More to Know ... From the nature of the cosmos to the nature of societies, the following 100 questions span the sciences. Some are pieces of questions discussed above; others are big questions in their own right. Some will drive scientific inquiry for the next century; others may soon be answered. Many will undoubtedly spawn new questions. Is ours the only universe? A number of quantum theorists and cosmologists are trying to figure out whether our universe is part of a bigger "multiverse." But others suspect that this hard-to-test idea may be a question for philosophers. What drove cosmic inflation? In the first moments after the big bang, the universe blew up at an incredible rate. But what did the blowing? Measurements of the cosmic microwave background and other astrophysical observations are narrowing the possibilities. When and how did the first stars and galaxies form? The broad brush strokes are visible, but the fine details aren't. Data from satellites and ground-based telescopes may soon help pinpoint, among other particulars, when the first generation of stars burned off the hydrogen "fog" that filled the universe. Where do ultrahigh-energy cosmic rays come from? Above a certain energy, cosmic rays don't travel very far before being destroyed. 
So why are cosmic-ray hunters spotting such rays with no obvious source within our galaxy? What powers quasars? The mightiest energy fountains in the universe probably get their power from matter plunging into whirling supermassive black holes. But the details of what drives their jets remain anybody's guess. What is the nature of black holes? Relativistic mass crammed into a quantum-sized object? It's a recipe for disaster--and scientists are still trying to figure out the ingredients. Why is there more matter than antimatter? To a particle physicist, matter and antimatter are almost the same. Some subtle difference must explain why matter is common and antimatter rare. Does the proton decay? In a theory of everything, quarks (which make up protons) should somehow be convertible to leptons (such as electrons)--so catching a proton decaying into something else might reveal new laws of particle physics. What is the nature of gravity? It clashes with quantum theory. It doesn't fit in the Standard Model. Nobody has spotted the particle that is responsible for it. Newton's apple contained a whole can of worms. Why is time different from other dimensions? It took millennia for scientists to realize that time is a dimension, like the three spatial dimensions, and that time and space are inextricably linked. The equations make sense, but they don't satisfy those who ask why we perceive a "now" or why time seems to flow the way it does. Are there smaller building blocks than quarks? Atoms were "uncuttable." Then scientists discovered protons, neutrons, and other subatomic particles--which were, in turn, shown to be made up of quarks and gluons. Is there something more fundamental still? Are neutrinos their own antiparticles? Nobody knows this basic fact about neutrinos, although a number of underground experiments are under way. Answering this question may be a crucial step to understanding the origin of matter in the universe.
Is there a unified theory explaining all correlated electron systems? High-temperature superconductors and materials with giant and colossal magnetoresistance are all governed by the collective rather than individual behavior of electrons. There is currently no common framework for understanding them. What is the most powerful laser researchers can build? Theorists say an intense enough laser field would rip photons into electron-positron pairs, dousing the beam. But no one knows whether it's possible to reach that point. Can researchers make a perfect optical lens? They've done it with microwaves but never with visible light. Is it possible to create magnetic semiconductors that work at room temperature? Such devices have been demonstrated at low temperatures but not yet in a range warm enough for spintronics applications. What is the pairing mechanism behind high-temperature superconductivity? Electrons in superconductors surf together in pairs. After 2 decades of intense study, no one knows what holds them together in the complex, high-temperature materials. Can we develop a general theory of the dynamics of turbulent flows and the motion of granular materials? So far, such "nonequilibrium systems" defy the tool kit of statistical mechanics, and the failure leaves a gaping hole in physics. Are there stable high-atomic-number elements? A superheavy element with 184 neutrons and 114 protons should be relatively stable, if physicists can create it. Is superfluidity possible in a solid? If so, how? Despite hints in solid helium, nobody is sure whether a crystalline material can flow without resistance. If new types of experiments show that such outlandish behavior is possible, theorists would have to explain how. What is the structure of water? Researchers continue to tussle over how many bonds each H2O molecule makes with its nearest neighbors. What is the nature of the glassy state?
Molecules in a glass are arranged much like those in liquids but are more tightly packed. Where and why does liquid end and glass begin? Are there limits to rational chemical synthesis? The larger synthetic molecules get, the harder it is to control their shapes and make enough copies of them to be useful. Chemists will need new tools to keep their creations growing. What is the ultimate efficiency of photovoltaic cells? Conventional solar cells top out at converting 32% of the energy in sunlight to electricity. Can researchers break through the barrier? Will fusion always be the energy source of the future? It's been 35 years away for about 50 years, and unless the international community gets its act together, it'll be 35 years away for many decades to come. What drives the solar magnetic cycle? Scientists believe differing rates of rotation from place to place on the sun underlie its 22-year sunspot cycle. They just can't make it work in their simulations. Either a detail is askew, or it's back to the drawing board. How do planets form? How bits of dust and ice and gobs of gas came together to form the planets without the sun devouring them all is still unclear. Planetary systems around other stars should provide clues. What causes ice ages? Something about the way the planet tilts, wobbles, and careens around the sun presumably brings on ice ages every 100,000 years or so, but reams of climate records haven't explained exactly how. What causes reversals in Earth's magnetic field? Computer models and laboratory experiments are generating new data on how Earth's magnetic poles might flip-flop. The trick will be matching simulations to enough aspects of the magnetic field beyond the inaccessible core to build a convincing case. Are there earthquake precursors that can lead to useful predictions? Prospects for finding signs of an imminent quake have been waning since the 1970s. 
Understanding faults will progress, but routine prediction would require an as-yet-unimagined breakthrough. Is there--or was there--life elsewhere in the solar system? The search for life--past or present--on other planetary bodies now drives NASA's planetary exploration program, which focuses on Mars, where water abounded when life might have first arisen. What is the origin of homochirality in nature? Most biomolecules can be synthesized in mirror-image shapes. Yet in organisms, amino acids are always left-handed, and sugars are always right-handed. The origins of this preference remain a mystery. Can we predict how proteins will fold? Out of a near infinitude of possible ways to fold, a protein picks one in just tens of microseconds. The same task takes 30 years of computer time. How many proteins are there in humans? It has been hard enough counting genes. Proteins can be spliced in different ways and decorated with numerous functional groups, all of which makes counting their numbers impossible for now. How do proteins find their partners? Protein-protein interactions are at the heart of life. To understand how partners come together in precise orientations in seconds, researchers need to know more about the cell's biochemistry and structural organization. How many forms of cell death are there? In the 1970s, apoptosis was finally recognized as distinct from necrosis. Some biologists now argue that the cell death story is even more complicated. Identifying new ways cells die could lead to better treatments for cancer and degenerative diseases. What keeps intracellular traffic running smoothly? Membranes inside cells transport key nutrients around, and through, various cell compartments without sticking to each other or losing their way. Insights into how membranes stay on track could help conquer diseases, such as cystic fibrosis. What enables cellular components to copy themselves independent of DNA? 
Centrosomes, which help pull apart paired chromosomes, and other organelles replicate on their own time, without DNA's guidance. This independence still defies explanation. What roles do different forms of RNA play in genome function? RNA is turning out to play a dizzying assortment of roles, from potentially passing genetic information to offspring to muting gene expression. Scientists are scrambling to decipher this versatile molecule. What role do telomeres and centromeres play in genome function? These chromosome features will remain mysteries until new technologies can sequence them. Why are some genomes really big and others quite compact? The puffer fish genome is 400 million bases; one lungfish's is 133 billion bases long. Repetitive and duplicated DNA don't explain why this and other size differences exist. What is all that "junk" doing in our genomes? DNA between genes is proving important for genome function and the evolution of new species. Comparative sequencing, microarray studies, and lab work are helping genomicists find a multitude of genetic gems amid the junk. How much will new technologies lower the cost of sequencing? New tools and conceptual breakthroughs are driving the cost of DNA sequencing down by orders of magnitude. The reductions are enabling research from personalized medicine to evolutionary biology to thrive. How do organs and whole organisms know when to stop growing? A person's right and left legs almost always end up the same length, and the hearts of mice and elephants each fit the proper rib cage. How genes set limits on cell size and number continues to mystify. How can genome changes other than mutations be inherited? Researchers are finding ever more examples of this process, called epigenetics, but they can't explain what causes and preserves the changes. How is asymmetry determined in the embryo? 
Whirling cilia help an embryo tell its left from its right, but scientists are still looking for the first factors that give a relatively uniform ball of cells a head, tail, front, and back. How do limbs, fins, and faces develop and evolve? The genes that determine the length of a nose or the breadth of a wing are subject to natural and sexual selection. Understanding how selection works could lead to new ideas about the mechanics of evolution with respect to development. What triggers puberty? Nutrition--including that received in utero--seems to help set this mysterious biological clock, but no one knows exactly what forces childhood to end. Are stem cells at the heart of all cancers? The most aggressive cancer cells look a lot like stem cells. If cancers are caused by stem cells gone awry, studies of a cell's "stemness" may lead to tools that could catch tumors sooner and destroy them more effectively. Is cancer susceptible to immune control? Although our immune responses can suppress tumor growth, tumor cells can combat those responses with counter-measures. This defense can stymie researchers hoping to develop immune therapies against cancer. Can cancers be controlled rather than cured? Drugs that cut off a tumor's fuel supplies--say, by stopping blood-vessel growth--can safely check or even reverse tumor growth. But how long the drugs remain effective is still unknown. Is inflammation a major factor in all chronic diseases? It's a driver of arthritis, but cancer and heart disease? More and more, the answer seems to be yes, and the question remains why and how. How do prion diseases work? Even if one accepts that prions are just misfolded proteins, many mysteries remain. How can they go from the gut to the brain, and how do they kill cells once there, for example. How much do vertebrates depend on the innate immune system to fight infection? This system predates the vertebrate adaptive immune response. 
Its relative importance is unclear, but immunologists are working to find out. Does immunologic memory require chronic exposure to antigens? Yes, say a few prominent thinkers, but experiments with mice now challenge the theory. Putting the debate to rest would require proving that something is not there, so the question likely will not go away. Why doesn't a pregnant woman reject her fetus? Recent evidence suggests that the mother's immune system doesn't "realize" that the fetus is foreign even though it gets half its genes from the father. Yet just as Nobelist Peter Medawar said when he first raised this question in 1952, "the verdict has yet to be returned." What synchronizes an organism's circadian clocks? Circadian clock genes have popped up in all types of creatures and in many parts of the body. Now the challenge is figuring out how all the gears fit together and what keeps the clocks set to the same time. How do migrating organisms find their way? Birds, butterflies, and whales make annual journeys of thousands of kilometers. They rely on cues such as stars and magnetic fields, but the details remain unclear. Why do we sleep? A sound slumber may refresh muscles and organs or keep animals safe from dangers lurking in the dark. But the real secret of sleep probably resides in the brain, which is anything but still while we're snoring away. Why do we dream? Freud thought dreaming provides an outlet for our unconscious desires. Now, neuroscientists suspect that brain activity during REM sleep--when dreams occur--is crucial for learning. Is the experience of dreaming just a side effect? Why are there critical periods for language learning? Monitoring brain activity in young children--including infants--may shed light on why children pick up languages with ease while adults often struggle to learn train station basics in a foreign tongue. Do pheromones influence human behavior? Many animals use airborne chemicals to communicate, particularly when mating. 
Controversial studies have hinted that humans too use pheromones. Identifying them will be key to assessing their sway on our social lives. How do general anesthetics work? Scientists are chipping away at the drugs' effects on individual neurons, but understanding how they render us unconscious will be a tougher nut to crack. What causes schizophrenia? Researchers are trying to track down genes involved in this disorder. Clues may also come from research on traits schizophrenics share with normal people. What causes autism? Many genes probably contribute to this baffling disorder, as well as unknown environmental factors. A biomarker for early diagnosis would help improve existing therapy, but a cure is a distant hope. To what extent can we stave off Alzheimer's? A 5- to 10-year delay in this late-onset disease would improve old age for millions. Researchers are determining whether treatments with hormones or antioxidants, or mental and physical exercise, will help. What is the biological basis of addiction? Addiction involves the disruption of the brain's reward circuitry. But personality traits such as impulsivity and sensation-seeking also play a part in this complex behavior. Is morality hardwired into the brain? That question has long puzzled philosophers; now some neuroscientists think brain imaging will reveal circuits involved in reasoning. What are the limits of learning by machines? Computers can already beat the world's best chess players, and they have a wealth of information on the Web to draw on. But abstract reasoning is still beyond any machine. How much of personality is genetic? Aspects of personality are influenced by genes; environment modifies the genetic effects. The relative contributions remain under debate. What is the biological root of sexual orientation? 
Much of the "environmental" contribution to homosexuality may occur before birth in the form of prenatal hormones, so answering this question will require more than just the hunt for "gay genes." Will there ever be a tree of life that systematists can agree on? Despite better morphological, molecular, and statistical methods, researchers' trees don't agree. Expect greater, but not complete, consensus. How many species are there on Earth? Count all the stars in the sky? Impossible. Count all the species on Earth? Ditto. But the biodiversity crisis demands that we try. What is a species? A "simple" concept that's been muddied by evolutionary data; a clear definition may be a long time in coming. Why does lateral transfer occur in so many species and how? Once considered rare, gene swapping, particularly among microbes, is proving quite common. But why and how genes are so mobile--and the effect on fitness--remains to be determined. Who was LUCA (the last universal common ancestor)? Ideas about the origin of the 1.5-billion-year-old "mother" of all complex organisms abound. The continued discovery of primitive microbes, along with comparative genomics, should help resolve life's deep past. How did flowers evolve? Darwin called this question an "abominable mystery." Flowers arose in the cycads and conifers, but the details of their evolution remain obscure. How do plants make cell walls? Cellulose and pectin walls surround cells, keeping water in and supporting tall trees. The biochemistry holds the secrets to turning its biomass into fuel. How is plant growth controlled? Redwoods grow to be hundreds of meters tall, Arctic willows barely 10 centimeters. Understanding the difference could lead to higher-yielding crops. Why aren't all plants immune to all diseases? Plants can mount a general immune response, but they also maintain molecular snipers that take out specific pathogens. 
Plant pathologists are asking why different species, even closely related ones, have different sets of defenders. The answer could result in hardier crops. What is the basis of variation in stress tolerance in plants? We need crops that better withstand drought, cold, and other stresses. But there are so many genes involved, in complex interactions, that no one has yet figured out which ones work how. What caused mass extinctions? A huge impact did in the dinosaurs, but the search for other catastrophic triggers of extinction has had no luck so far. If more subtle or stealthy culprits are to blame, they will take considerably longer to find. Can we prevent extinction? Finding cost-effective and politically feasible ways to save many endangered species requires creative thinking. Why were some dinosaurs so large? Dinosaurs reached almost unimaginable sizes, some in less than 20 years. But how did the long-necked sauropods, for instance, eat enough to pack on up to 100 tons without denuding their world? How will ecosystems respond to global warming? To anticipate the effects of the intensifying greenhouse, climate modelers will have to focus on regional changes and ecologists on the right combination of environmental changes. How many kinds of humans coexisted in the recent past, and how did they relate? The new dwarf human species fossil from Indonesia suggests that at least four kinds of humans thrived in the past 100,000 years. Better dates and additional material will help confirm or revise this picture. What gave rise to modern human behavior? Did Homo sapiens acquire abstract thought, language, and art gradually or in a cultural "big bang," which in Europe occurred about 40,000 years ago? Data from Africa, where our species arose, may hold the key to the answer. What are the roots of human culture? No animal comes close to having humans' ability to build on previous discoveries and pass the improvements on. 
What determines those differences could help us understand how human culture evolved. What are the evolutionary roots of language and music? Neuroscientists exploring how we speak and make music are just beginning to find clues as to how these prized abilities arose. What are human races, and how did they develop? Anthropologists have long argued that race lacks biological reality. But our genetic makeup does vary with geographic origin and as such raises political and ethical as well as scientific questions. Why do some countries grow and others stagnate? From Norway to Nigeria, living standards across countries vary enormously, and they're not becoming more equal. What impact do large government deficits have on a country's interest rates and economic growth rate? The United States could provide a test case. Are political and economic freedom closely tied? China may provide one answer. Why has poverty increased and life expectancy declined in sub-Saharan Africa? Almost all efforts to reduce poverty in sub-Saharan Africa have failed. Figuring out what will work is crucial to alleviating massive human suffering. The following six mathematics questions are drawn from a list of seven outstanding problems selected by the Clay Mathematics Institute. (The seventh problem is discussed on p. 96.) For more details, go to www.claymath.org/millennium. Is there a simple test for determining whether an elliptic curve has an infinite number of rational solutions? Equations of the form y^2 = x^3 + ax + b are powerful mathematical tools. The Birch and Swinnerton-Dyer conjecture tells how to determine how many solutions they have in the realm of rational numbers--information that could solve a host of problems, if the conjecture is true. Can a Hodge cycle be written as a sum of algebraic cycles? Two useful mathematical structures arose independently in geometry and in abstract algebra.
The Hodge conjecture posits a surprising link between them, but the bridge remains to be built. Will mathematicians unleash the power of the Navier-Stokes equations? First written down in the 1840s, the equations hold the keys to understanding both smooth and turbulent flow. To harness them, though, theorists must find out exactly when they work and under what conditions they break down. Does Poincaré's test identify spheres in four-dimensional space? You can tie a string around a doughnut, but it will slide right off a sphere. The mathematical principle behind that observation can reliably spot every spherelike object in 3D space. Henri Poincaré conjectured that it should also work in the next dimension up, but no one has proved it yet. Do mathematically interesting zero-value solutions of the Riemann zeta function all have the form a + bi? Don't sweat the details. Since the mid-19th century, the "Riemann hypothesis" has been the monster catfish in mathematicians' pond. If true, it will give them a wealth of information about the distribution of prime numbers and other long-standing mysteries. Does the Standard Model of particle physics rest on solid mathematical foundations? For almost 50 years, the model has rested on "quantum Yang-Mills theory," which links the behavior of particles to structures found in geometry. The theory is breathtakingly elegant and useful--but no one has proved that it's sound. _________________________________________________________________ What Is the Biological Basis of Consciousness? Greg Miller For centuries, debating the nature of consciousness was the exclusive purview of philosophers. But if the recent torrent of books on the topic is any indication, a shift has taken place: Scientists are getting into the game. Has the nature of consciousness finally shifted from a philosophical question to a scientific one that can be solved by doing experiments? The answer, as with any question related to this topic, depends on whom you ask.
But scientific interest in this slippery, age-old question seems to be gathering momentum. So far, however, although theories abound, hard data are sparse. The discourse on consciousness has been hugely influenced by René Descartes, the French philosopher who in the mid-17th century declared that body and mind are made of different stuff entirely. It must be so, Descartes concluded, because the body exists in both time and space, whereas the mind has no spatial dimension. Recent scientifically oriented accounts of consciousness generally reject Descartes's solution; most prefer to treat body and mind as different aspects of the same thing. In this view, consciousness emerges from the properties and organization of neurons in the brain. But how? And how can scientists, with their devotion to objective observation and measurement, gain access to the inherently private and subjective realm of consciousness? Some insights have come from examining neurological patients whose injuries have altered their consciousness. Damage to certain evolutionarily ancient structures in the brainstem robs people of consciousness entirely, leaving them in a coma or a persistent vegetative state. Although these regions may be a master switch for consciousness, they are unlikely to be its sole source. Different aspects of consciousness are probably generated in different brain regions. Damage to visual areas of the cerebral cortex, for example, can produce strange deficits limited to visual awareness. One extensively studied patient, known as D.F., is unable to identify shapes or determine the orientation of a thin slot in a vertical disk. Yet when asked to pick up a card and slide it through the slot, she does so easily. At some level, D.F. must know the orientation of the slot to be able to do this, but she seems not to know she knows. Cleverly designed experiments can produce similar dissociations of unconscious and conscious knowledge in people without neurological damage.
And researchers hope that scanning the brains of subjects engaged in such tasks will reveal clues about the neural activity required for conscious awareness. Work with monkeys also may elucidate some aspects of consciousness, particularly visual awareness. One experimental approach is to present a monkey with an optical illusion that creates a "bistable percept," looking like one thing one moment and another the next. (The orientation-flipping Necker cube is a well-known example.) Monkeys can be trained to indicate which version they perceive. At the same time, researchers hunt for neurons that track the monkey's perception, in hopes that these neurons will lead them to the neural systems involved in conscious visual awareness and ultimately to an explanation of how a particular pattern of photons hitting the retina produces the experience of seeing, say, a rose. Experiments under way at present generally address only pieces of the consciousness puzzle, and very few directly address the most enigmatic aspect of the conscious human mind: the sense of self. Yet the experimental work has begun, and if the results don't provide a blinding insight into how consciousness arises from tangles of neurons, they should at least refine the next round of questions. Ultimately, scientists would like to understand not just the biological basis of consciousness but also why it exists. What selection pressure led to its development, and how many of our fellow creatures share it? Some researchers suspect that consciousness is not unique to humans, but of course much depends on how the term is defined. Biological markers for consciousness might help settle the matter and shed light on how consciousness develops early in life. Such markers could also inform medical decisions about loved ones who are in an unresponsive state. 
Until fairly recently, tackling the subject of consciousness was a dubious career move for any scientist without tenure (and perhaps a Nobel Prize already in the bag). Fortunately, more young researchers are now joining the fray. The unanswered questions should keep them--and the printing presses--busy for many years to come. _________________________________________________________________ Why Do Humans Have So Few Genes? Elizabeth Pennisi When leading biologists were unraveling the sequence of the human genome in the late 1990s, they ran a pool on the number of genes contained in the 3 billion base pairs that make up our DNA. Few bets came close. The conventional wisdom a decade or so ago was that we need about 100,000 genes to carry out the myriad cellular processes that keep us functioning. But it turns out that we have only about 25,000 genes--about the same number as a tiny flowering plant called Arabidopsis and barely more than the worm Caenorhabditis elegans. That big surprise reinforced a growing realization among geneticists: Our genomes and those of other mammals are far more flexible and complicated than they once seemed. The old notion of one gene/one protein has gone by the board: It is now clear that many genes can make more than one protein. Regulatory proteins, RNA, noncoding bits of DNA, even chemical and structural alterations of the genome itself control how, where, and when genes are expressed. Figuring out how all these elements work together to choreograph gene expression is one of the central challenges facing biologists. In the past few years, it has become clear that a phenomenon called alternative splicing is one reason human genomes can produce such complexity with so few genes. Human genes contain both coding DNA--exons--and noncoding DNA. In some genes, different combinations of exons can become active at different times, and each combination yields a different protein. 
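The combinatorics behind that claim can be sketched with a toy model. In the sketch below, the gene and exon names are hypothetical, and real splicing is far more constrained than free choice of exons; the point is only that the number of possible isoforms grows exponentially with exon count:

```python
from itertools import combinations

def isoforms(exons):
    """All ordered, non-empty subsets of a gene's exons (toy model of
    alternative splicing; real splice-site choice is far more constrained)."""
    return [combo
            for k in range(1, len(exons) + 1)
            for combo in combinations(exons, k)]

# Hypothetical five-exon gene.
gene = ["exon1", "exon2", "exon3", "exon4", "exon5"]
print(len(isoforms(gene)))  # 2**5 - 1 = 31 distinct exon combinations
```

Even under this crude counting, five exons yield 31 candidate transcripts, which is why a genome of ~25,000 genes can, in principle, encode hundreds of thousands of proteins.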
Alternative splicing was long considered a rare hiccup during transcription, but researchers have concluded that it may occur in half--some say close to all--of our genes. That finding goes a long way toward explaining how so few genes can produce hundreds of thousands of different proteins. But how the transcription machinery decides which parts of a gene to read at any particular time is still largely a mystery. The same could be said for the mechanisms that determine which genes or suites of genes are turned on or off at particular times and places. Researchers are discovering that each gene needs a supporting cast of hundreds to get its job done. They include proteins that shut down or activate a gene, for example by adding acetyl or methyl groups to the DNA. Other proteins, called transcription factors, interact with the genes more directly: They bind to landing sites situated near the gene under their control. As with alternative splicing, activation of different combinations of landing sites makes possible exquisite control of gene expression, but researchers have yet to figure out exactly how all these regulatory elements really work or how they fit in with alternative splicing. [Figure 1: Approximate number of genes] In the past decade or so, researchers have also come to appreciate the key roles played by chromatin proteins and RNA in regulating gene expression. Chromatin proteins are essentially the packaging for DNA, holding chromosomes in well-defined spirals. By slightly changing shape, chromatin may expose different genes to the transcription machinery. Genes also dance to the tune of RNA. Small RNA molecules, many less than 30 bases, now share the limelight with other gene regulators. Many researchers who once focused on messenger RNA and other relatively large RNA molecules have in the past 5 years turned their attention to these smaller cousins, including microRNA and small nuclear RNA.
Surprisingly, RNAs in these various guises shut down and otherwise alter gene expression. They also are key to cell differentiation in developing organisms, but the mechanisms are not fully understood. Researchers have made enormous strides in pinpointing these various mechanisms. By matching up genomes from organisms on different branches on the evolutionary tree, genomicists are locating regulatory regions and gaining insights into how mechanisms such as alternative splicing evolved. These studies, in turn, should shed light on how these regions work. Experiments in mice, such as the addition or deletion of regulatory regions and manipulating RNA, and computer models should also help. But the central question is likely to remain unsolved for a long time: How do all these features meld together to make us whole? _________________________________________________________________ To What Extent Are Genetic Variation and Personal Health Linked? Jennifer Couzin Forty years ago, doctors learned why some patients who received the anesthetic succinylcholine awoke normally but remained temporarily paralyzed and unable to breathe: They shared an inherited quirk that slowed their metabolism of the drug. Later, scientists traced sluggish succinylcholine metabolism to a particular gene variant. Roughly 1 in 3500 people carry two deleterious copies, putting them at high risk of this distressing side effect. The solution to the succinylcholine mystery was among the first links drawn between genetic variation and an individual's response to drugs. Since then, a small but growing number of differences in drug metabolism have been linked to genetics, helping explain why some patients benefit from a particular drug, some gain nothing, and others suffer toxic side effects. The same sort of variation, it is now clear, plays a key role in individual risks of coming down with a variety of diseases. 
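The "1 in 3500" figure for the succinylcholine variant can be unpacked with a textbook Hardy-Weinberg back-of-envelope calculation. This is a sketch under standard simplifying assumptions (random mating, a single biallelic locus) that the article itself does not state:

```python
import math

# Hardy-Weinberg back-calculation for the succinylcholine example:
# ~1 in 3500 people carry two deleterious copies, i.e. q^2 = 1/3500.
# Assumes random mating at a single biallelic locus -- a simplification.
affected = 1 / 3500                 # frequency of the two-copy genotype, q^2
q = math.sqrt(affected)             # deleterious allele frequency
carriers = 2 * q * (1 - q)          # heterozygote (one-copy) frequency, 2pq

print(f"allele frequency q  ~ {q:.3f}")        # ~0.017
print(f"carrier frequency   ~ {carriers:.3f}")  # ~0.033, roughly 1 in 30
```

Under those assumptions, a genotype that affects only 1 person in 3500 is nonetheless carried, in single copy, by roughly 1 person in 30, which is why rare recessive pharmacogenetic variants can persist unnoticed in the population.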
Gene variants have been linked to elevated risks for disorders from Alzheimer's disease to breast cancer, and they may help explain why, for example, some smokers develop lung cancer whereas many others don't. These developments have led to hopes--and some hype--that we are on the verge of an era of personalized medicine, one in which genetic tests will determine disease risks and guide prevention strategies and therapies. But digging up the DNA responsible--if in fact DNA is responsible--and converting that knowledge into gene tests that doctors can use remains a formidable challenge. Many conditions, including various cancers, heart attacks, lupus, and depression, likely arise when a particular mix of genes collides with something in the environment, such as nicotine or a fatty diet. These multigene interactions are subtler and knottier than the single gene drivers of diseases such as hemophilia and cystic fibrosis; spotting them calls for statistical inspiration and rigorous experiments repeated again and again to guard against introducing unproven gene tests into the clinic. And determining treatment strategies will be no less complex: Last summer, for example, a team of scientists linked 124 different genes to resistance to four leukemia drugs. But identifying gene networks like these is only the beginning. One of the toughest tasks is replicating these studies--an especially difficult proposition in diseases that are not overwhelmingly heritable, such as asthma, or ones that affect fairly small patient cohorts, such as certain childhood cancers. Many clinical trials do not routinely collect DNA from volunteers, making it sometimes difficult for scientists to correlate disease or drug response with genes. Gene microarrays, which measure expression of dozens of genes at once, can be fickle and supply inconsistent results. Gene studies can also be prohibitively costly. 
Nonetheless, genetic dissection of some diseases--such as cancer, asthma, and heart disease--is galloping ahead. Progress in other areas, such as psychiatric disorders, is slower. Severely depressed or schizophrenic patients could benefit enormously from tests that reveal which drug and dose will help them the most, but unlike asthma, drug response can be difficult to quantify biologically, making gene-drug relations tougher to pin down.

As DNA sequence becomes more available and technologies improve, the genetic patterns that govern health will likely come into sharper relief. Genetic tools still under construction, such as a haplotype map that will be used to discern genetic variation behind common diseases, could further accelerate the search for disease genes. The next step will be designing DNA tests to guide clinical decision-making--and using them. If history is any guide, integrating such tests into standard practice will take time. In emergencies--a heart attack, an acute cancer, or an asthma attack--such tests will be valuable only if they rapidly deliver results.

Ultimately, comprehensive personalized medicine will come only if pharmaceutical companies want it to--and it will take enormous investments in research and development. Many companies worry that testing for genetic differences will narrow their market and squelch their profits. Still, researchers continue to identify new opportunities. In May, the Icelandic company deCODE Genetics reported that an experimental asthma drug that pharmaceutical giant Bayer had abandoned appeared to decrease the risk of heart attack in more than 170 patients who carried particular gene variants. The drug targets the protein produced by one of those genes. The finding is likely to be just a foretaste of the many surprises in store, as the braids binding DNA, drugs, and disease are slowly unwound.

_________________________________________________________________

Can the Laws of Physics Be Unified?
Charles Seife

At its best, physics eliminates complexity by revealing underlying simplicity. Maxwell's equations, for example, describe all the confusing and diverse phenomena of classical electricity and magnetism by means of four simple rules. These equations are beautiful; they have an eerie symmetry, mirroring one another in an intricate dance of symbols. The four together feel as elegant, as whole, and as complete to a physicist as a Shakespearean sonnet does to a poet. The Standard Model of particle physics is an unfinished poem. Most of the pieces are there, and even unfinished, it is arguably the most brilliant opus in the literature of physics.

With great precision, it describes all known matter--all the subatomic particles such as quarks and leptons--as well as the forces by which those particles interact with one another. These forces are electromagnetism, which describes how charged objects feel each other's influence; the weak force, which explains how particles can change their identities; and the strong force, which describes how quarks stick together to form protons and other composite particles. But as lovely as the Standard Model's description is, it is in pieces, and some of those pieces--those that describe gravity--are missing. It is a few shards of beauty that hint at something greater, like a few lines of Sappho on a fragment of papyrus.

The beauty of the Standard Model is in its symmetry; mathematicians describe its symmetries with objects known as Lie groups. And a mere glimpse at the Standard Model's Lie group betrays its fragmented nature: SU(3) × SU(2) × U(1). Each of those pieces represents one type of symmetry, but the symmetry of the whole is broken. Each of the forces behaves in a slightly different way, so each is described with a slightly different symmetry. But those differences might be superficial.
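The fragmented symmetry can be made concrete by counting generators. This is a minimal sketch using the textbook dimension formula dim SU(n) = n² − 1 (standard group theory, not a detail from the article); each generator corresponds to one force-carrying gauge boson:

```python
def dim_su(n):
    """Number of generators (dimension) of the Lie group SU(n)."""
    return n * n - 1

# Standard Model gauge group SU(3) x SU(2) x U(1):
strong = dim_su(3)   # 8 generators -> the 8 gluons
weak = dim_su(2)     # 3 generators for the weak sector
hypercharge = 1      # 1 generator for U(1)

# 12 generators in all, matching the 12 known force carriers
# (8 gluons, W+, W-, Z, and the photon).
print(strong + weak + hypercharge)  # 12
```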
Electromagnetism and the weak force appear very dissimilar, but in the 1960s physicists showed that at high temperatures, the two forces "unify." It becomes apparent that electromagnetism and the weak force are really the same thing, just as it becomes obvious that ice and liquid water are the same substance if you warm them up together. This connection led physicists to hope that the strong force could also be unified with the other two forces, yielding one large theory described by a single symmetry such as SU(5).

A unified theory should have observable consequences. For example, if the strong force truly is the same as the electroweak force, then protons might not be truly stable; once in a long while, they should decay spontaneously. Despite many searches, nobody has spotted a proton decay, nor has anyone sighted any particles predicted by some symmetry-enhancing modifications to the Standard Model, such as supersymmetry. Worse yet, even such a unified theory can't be complete--as long as it ignores gravity.

[Figure 1: Fundamental forces. A theory that ties all four forces together is still lacking.]

Gravity is a troublesome force. The theory that describes it, general relativity, assumes that space and time are smooth and continuous, whereas the underlying quantum physics that governs subatomic particles and forces is inherently discontinuous and jumpy. Gravity clashes with quantum theory so badly that nobody has come up with a convincing way to build a single theory that includes all the particles, the strong and electroweak forces, and gravity all in one big bundle. But physicists do have some leads. Perhaps the most promising is superstring theory.
Superstring theory has a large following because it provides a way to unify everything into one large theory with a single symmetry--SO(32) for one branch of superstring theory, for example--but it requires a universe with 10 or 11 dimensions, scads of undetected particles, and a lot of intellectual baggage that might never be verifiable. It may be that there are dozens of unified theories, only one of which is correct, but scientists may never have the means to determine which. Or it may be that the struggle to unify all the forces and particles is a fool's quest.

In the meantime, physicists will continue to look for proton decays, as well as search for supersymmetric particles in underground traps and in the Large Hadron Collider (LHC) in Geneva, Switzerland, when it comes online in 2007. Scientists believe that the LHC will also reveal the existence of the Higgs boson, a particle intimately related to fundamental symmetries in the model of particle physics. And physicists hope that one day, they will be able to finish the unfinished poem and frame its fearful symmetry.

_________________________________________________________________

How Much Can Human Life Span Be Extended?

Jennifer Couzin

When Jeanne Calment died in a nursing home in southern France in 1997, she was 122 years old, the longest-living human ever documented. But Calment's uncommon status will fade in subsequent decades if the predictions of some biologists and demographers come true. Life-span extension in species from yeast to mice and extrapolation from life expectancy trends in humans have convinced a swath of scientists that humans will routinely coast beyond 100 or 110 years of age. (Today, 1 in 10,000 people in industrialized countries hold centenarian status.) Others say human life span may be far more limited. The elasticity found in other species might not apply to us. Furthermore, testing life-extension treatments in humans may be nearly impossible for practical and ethical reasons.
Just 2 or 3 decades ago, research on aging was a backwater. But when molecular biologists began hunting for ways to prolong life, they found that life span was remarkably pliable. Reducing the activity of an insulinlike receptor more than doubles the life span of worms to a startling--for them--6 weeks. Put certain strains of mice on near-starvation but nutrient-rich diets, and they live 50% longer than normal. Some of these effects may not occur in other species. A worm's ability to enter a "dauer" state, which resembles hibernation, may be critical, for example. And shorter-lived species such as worms and fruit flies, whose aging has been delayed the most, may be more susceptible to life-span manipulation. But successful approaches are converging on a few key areas: calorie restriction; reducing levels of insulinlike growth factor 1 (IGF-1), a protein; and preventing oxidative damage to the body's tissues. All three might be interconnected, but so far that hasn't been confirmed (although calorie-restricted animals have low levels of IGF-1). Can these strategies help humans live longer? And how do we determine whether they will? Unlike drugs for cancer or heart disease, the benefits of antiaging treatments are fuzzier, making studies difficult to set up and to interpret. Safety is uncertain; calorie restriction reduces fertility in animals, and lab flies bred to live long can't compete with their wild counterparts. Furthermore, garnering results--particularly from younger volunteers, who may be likeliest to benefit because they've aged the least--will take so long that by the time results are in, those who began the study will be dead. That hasn't stopped scientists, some of whom have founded companies, from searching for treatments to slow aging. One intriguing question is whether calorie restriction works in humans. It's being tested in primates, and the National Institute on Aging in Bethesda, Maryland, is funding short-term studies in people. 
Volunteers in those trials have been on a stringent diet for up to 1 year while researchers monitor their metabolism and other factors that could hint at how they're aging. Insights could also come from genetic studies of centenarians, who may have inherited long life from their parents. Many scientists believe that average human life span has an inherent upper limit, although they don't agree on whether it's 85 or 100 or 150.

One abiding question in the antiaging world is what the goal of all this work ought to be. Overwhelmingly, scientists favor treatments that will slow aging and stave off age-related diseases rather than simply extending life at its most decrepit. But even so, slowing aging could have profound social effects, upsetting actuarial tables and retirement plans. Then there's the issue of fairness: If antiaging therapies become available, who will receive them? How much will they cost? Individuals may find they can stretch their life spans. But that may be tougher to achieve for whole populations, although many demographers believe that the average life span will continue to climb as it has consistently for decades. If that happens, much of the increase may come from less dramatic strategies, such as heart disease and cancer prevention, that could also make the end of a long life more bearable.

_________________________________________________________________

What Controls Organ Regeneration?

R. John Davenport*

Unlike automobiles, humans get along pretty well for most of their lives with their original parts. But organs do sometimes fail, and we can't go to the mechanic for an engine rebuild or a new water pump--at least not yet. Medicine has battled back many of the acute threats, such as infection, that curtailed human life in past centuries. Now, chronic illnesses and deteriorating organs pose the biggest drain on human health in industrialized nations, and they will only increase in importance as the population ages.
Regenerative medicine--rebuilding organs and tissues--could conceivably be the 21st century equivalent of antibiotics in the 20th. Before that can happen, researchers must understand the signals that control regeneration.

Researchers have puzzled for centuries over how body parts replenish themselves. In the mid-1700s, for instance, Swiss researcher Abraham Trembley noted that when chopped into pieces, hydra--tubelike creatures with tentacles that live in fresh water--could grow back into complete, new organisms. Other scientists of the era examined the salamander's ability to replace a severed tail. And a century later, Thomas Hunt Morgan scrutinized planaria, flatworms that can regenerate even when whittled into 279 bits. But he decided that regeneration was an intractable problem and forsook planaria in favor of fruit flies. Mainstream biology has followed in Morgan's wake, focusing on animals suitable for studying genetic and embryonic development. But some researchers have pressed on with studies of regeneration superstars, and they've devised innovative strategies to tackle the genetics of these organisms. These efforts and investigations of new regeneration models--such as zebrafish and special mouse lines--are beginning to reveal the forces that guide regeneration and those that prevent it.

Animals exploit three principal strategies to regenerate organs. First, working organ cells that normally don't divide can multiply and grow to replenish lost tissue, as occurs in injured salamander hearts. Second, specialized cells can undo their training--a process known as dedifferentiation--and assume a more pliable form that can replicate and later respecialize to reconstruct a missing part. Salamanders and newts take this approach to heal and rebuild a severed limb, as do zebrafish to mend clipped fins. Finally, pools of stem cells can step in to perform required renovations. Planaria tap into this resource when reconstructing themselves.

[Figure 1: Self-repair. Newts reprogram their cells to reconstruct a severed limb.]

Humans already plug into these mechanisms to some degree. For instance, after surgical removal of part of a liver, healing signals tell remaining liver cells to resume growth and division to expand the organ back to its original size. Researchers have found that when properly enticed, some types of specialized human cells can revert to a more nascent state (see p. 85). And stem cells help replenish our blood, skin, and bones. So why do our hearts fill with scar tissue, our lenses cloud, and our brain cells perish?

Animals such as salamanders and planaria regenerate tissues by rekindling genetic mechanisms that guide the patterning of body structures during embryonic development. We employ similar pathways to shape our parts as embryos, but over the course of evolution, humans may have lost the ability to tap into them as adults, perhaps because the cell division required for regeneration elevated the likelihood of cancer. And we may have evolved the capacity to heal wounds rapidly to repel infection, even though speeding the pace means more scarring. Regeneration pros such as salamanders heal wounds methodically and produce pristine tissue. Avoiding fibrotic tissue could mean the difference between regenerating and not: Mouse nerves grow vigorously if experimentally severed in a way that prevents scarring, but if a scar forms, nerves wither. Unraveling the mysteries of regeneration will depend on understanding what separates our wound-healing process from that of animals that are able to regenerate. The difference might be subtle: Researchers have identified one strain of mice that seals up ear holes in weeks, whereas typical strains never do. A relatively modest number of genetic differences seems to underlie the effect. Perhaps altering a handful of genes would be enough to turn us into superhealers, too. But if scientists succeed in initiating the process in humans, new questions will emerge.
What keeps regenerating cells from running amok? And what ensures that regenerated parts are the right size and shape, and in the right place and orientation? If researchers can solve these riddles--and it's a big "if"--people might be able to order up replacement parts for themselves, not just their '67 Mustangs.

*R. John Davenport is an editor of Science's SAGE KE.

_________________________________________________________________

How Can a Skin Cell Become a Nerve Cell?

Gretchen Vogel

Like medieval alchemists who searched for an elixir that could turn base metals into gold, biology's modern alchemists have learned how to use oocytes to turn normal skin cells into valuable stem cells, and even whole animals. With practice, scientists have now made nuclear transfer nearly routine to produce cattle, cats, mice, sheep, goats, pigs, and--as a Korean team announced in May--even human embryonic stem (ES) cells. They hope to go still further and turn the stem cells into treatments for previously untreatable diseases. But like the medieval alchemists, today's cloning and stem cell biologists are working largely with processes they don't fully understand: What actually happens inside the oocyte to reprogram the nucleus is still a mystery, and scientists have a lot to learn before they can direct a cell's differentiation as smoothly as nature's program of development does every time a fertilized egg gives rise to the multiple cell types that make up a live baby.

Scientists have been investigating the reprogramming powers of the oocyte for half a century. In 1957, developmental biologists first discovered that they could insert the nucleus of adult frog cells into frog eggs and create dozens of genetically identical tadpoles. But in 50 years, the oocyte has yet to give up its secrets. The answers lie deep in cell biology.
Somehow, scientists know, the genes that control development--generally turned off in adult cells--get turned back on again by the oocyte, enabling the cell to take on the youthful potential of a newly fertilized egg. Scientists understand relatively little about these on-and-off switches in normal cells, however, let alone the unusual reversal that takes place during nuclear transfer.

[Figure 1: Cellular alchemist. A human oocyte.]

As cells differentiate, their DNA becomes more tightly packed, and genes that are no longer needed--or that should not be expressed--are blocked. The DNA wraps tightly around proteins called histones, and genes are then tagged with methyl groups that prevent the protein-making machinery in the cell from reaching them. Several studies have shown that enzymes that remove those methyl groups are crucial for nuclear transfer to work. But they are far from the only things that are needed.

If scientists could uncover the oocyte's secrets, it might be possible to replicate its tricks without using oocytes themselves, a resource that is fairly difficult to obtain and whose use raises numerous ethical questions. If scientists could come up with a cell-free bath that turned the clock back on already-differentiated cells, the implications could be enormous. Labs could rejuvenate cells from patients and perhaps then grow them into new tissue that could repair parts worn out by old age or disease. But scientists are far from sure if such cell-free alchemy is possible. The egg's very structure, its scaffolding of proteins that guide the chromosomes during cell division, may also play a key role in turning on the necessary genes. If so, developing an elixir of proteins that can turn back a cell's clock may remain elusive.

To really make use of the oocyte's power, scientists still need to learn how to direct the development of the rejuvenated stem cells and guide them into forming specific tissues.
Stem cells, especially those from embryos, spontaneously form dozens of cell types, but controlling that development to produce a single type of cell has proved more difficult. Although some teams have managed to produce nearly pure colonies of certain kinds of neural cells from ES cells, no one has managed to concoct a recipe that will direct the cells to become, say, a pure population of dopamine-producing neurons that could replace those missing in Parkinson's disease. Scientists are just beginning to understand how cues interact to guide a cell toward its final destiny. Decades of work in developmental biology have provided a start: Biologists have used mutant frogs, flies, mice, chicks, and fish to identify some of the main genes that control a developing cell's decision to become a bone cell or a muscle cell. But observing what goes wrong when a gene is missing is easier than learning to orchestrate differentiation in a culture dish. Understanding how the roughly 25,000 human genes work together to form tissues--and tweaking the right ones to guide an immature cell's development--will keep researchers occupied for decades. If they succeed, however, the result will be worth far more than its weight in gold.

_________________________________________________________________

How Does a Single Somatic Cell Become a Whole Plant?

Gretchen Vogel

It takes a certain amount of flexibility for a plant to survive and reproduce. It can stretch its roots toward water and its leaves toward sunlight, but it has few options for escaping predators or finding mates. To compensate, many plants have evolved repair mechanisms and reproductive strategies that allow them to produce offspring even without the meeting of sperm and egg. Some can reproduce from outgrowths of stems, roots, and bulbs, but others are even more radical, able to create new embryos from single somatic cells.
Most citrus trees, for example, can form embryos from the tissues surrounding the unfertilized gametes--a feat no animal can manage. The houseplant Bryophyllum can sprout embryos from the edges of its leaves, a bit like Athena springing from Zeus's head. Nearly 50 years ago, scientists learned that they could coax carrot cells to undergo such embryogenesis in the lab. Since then, people have used so-called somatic embryogenesis to propagate dozens of species, including coffee, magnolias, mangos, and roses. A Canadian company has planted entire forests of fir trees that started life in tissue culture. But like researchers who clone animals (see p. 85), plant scientists understand little about what actually controls the process. The search for answers might shed light on how cells' fates become fixed during development, and how plants manage to retain such flexibility.

Scientists aren't even sure which cells are capable of embryogenesis. Although earlier work assumed that all plant cells were equally labile, recent evidence suggests that only a subset of cells can transform into embryos. But what those cells look like before their transformation is a mystery. Researchers have videotaped cultures in which embryos develop but found no visual pattern that hints at which cells are about to sprout, and staining for certain patterns of gene expression has been inconclusive.

[Figure 1: Power of one. Orange tree embryos can sprout from a single somatic cell.]

Researchers do have a few clues about the molecules that might be involved. In the lab, the herbicide 2,4-dichlorophenoxyacetic acid (sold as weed killer and called 2,4-D) can prompt cells in culture to elongate, build a new cell wall, and start dividing to form embryos. The herbicide is a synthetic analog of the plant hormones called auxins, which control everything from the plant's response to light and gravity to the ripening of fruit.
Auxins might also be important in natural somatic embryogenesis: Embryos that sprout on top of veins near the leaf edge are exposed to relatively high levels of auxins. Recent work has also shown that over- or underexpression of certain genes in Arabidopsis plants can prompt embryogenesis in otherwise normal-looking leaf cells. Sorting out sex-free embryogenesis might help scientists understand the cellular switches that plants use to stay flexible while still keeping growth under control. Developmental biologists are keen to learn how those mechanisms compare in plants and animals. Indeed, some of the processes that control somatic embryogenesis may be similar to those that occur during animal cloning or limb regeneration (see p. 84). On a practical level, scientists would like to be able to use lab-propagation techniques on crop plants such as maize that still require normal pollination. That would speed up both breeding of new varieties and the production of hybrid seedlings--a flexibility that farmers and consumers could both appreciate.

_________________________________________________________________

How Does Earth's Interior Work?

Richard A. Kerr

The plate tectonics revolution went only so deep. True, it made wonderful sense of most of the planet's geology. But that's something like understanding the face of Big Ben; there must be a lot more inside to understand about how and why it all works. In the case of Earth, there's another 6300 kilometers of rock and iron beneath the tectonic plates whose churnings constitute the inner workings of a planetary heat engine. Tectonic plates jostling about the surface are like the hands sweeping across the clock face: informative in many ways but largely mute as to what drives them.

[Figure 1: A long way to go. Grasping the workings of plate tectonics will require deeper probing.]

Earth scientists inherited a rather simple picture of Earth's interior from their pre-plate tectonics colleagues. Earth was like an onion.
Seismic waves passing through the deep Earth suggested that beneath the broken skin of plates lies a 2800-kilometer layer of rocky mantle overlying 3470 kilometers of molten and--at the center--solid iron. The mantle was further subdivided at a depth of 670 kilometers into upper and lower layers, with a hint of a layer a couple of hundred kilometers thick at the bottom of the lower mantle. In the postrevolution era, the onion model continued to loom large. The dominant picture of Earth's inner workings divided the planet at the 670-kilometer depth, forming with the core a three-layer machine. Above the 670, the mantle churned slowly like a very shallow pot of boiling water, delivering heat and rock at mid-ocean ridges to make new crust and cool the interior and accepting cold sinking slabs of old plate at deep-sea trenches. A plume of hot rock might rise from just above the 670 to form a volcanic hot spot like Hawaii. But no hot rock rose up through the 670 barrier, and no cold rock sank down through it. Alternatively, argued a smaller contingent, the mantle churned from bottom to top like a deep stockpot, with plumes rising all the way from the core-mantle boundary. Forty years of probing inner Earth with ever more sophisticated seismic imaging has boosted the view of the engine's complexity without much calming the debate about how it works. Imaging now clearly shows that the 670 is no absolute barrier. Slabs penetrate the boundary, although with difficulty. Layered-earth advocates have duly dropped their impenetrable boundary to 1000 kilometers or deeper. Or maybe there's a flexible, semipermeable boundary somewhere that limits mixing to only the most insistent slabs or plumes. Now seismic imaging is also outlining two great globs of mantle rock standing beneath Africa and the Pacific like pistons. 
Researchers disagree about whether they are hotter than average and rising under their own buoyancy, denser and sinking, or merely passively being carried upward by adjacent currents. Thin lenses of partially melted rock dot the mantle bottom, perhaps marking the bottom of plumes, or perhaps not. Geochemists reading the entrails of elements and isotopes in mantle-derived rocks find signs of five long-lived "reservoirs" that must have resisted mixing in the mantle for billions of years. But they haven't a clue where in the depths of the mantle those reservoirs might be hiding.

How can we disassemble the increasingly complex planetary machine and find what makes it tick? With more of the same, plus a large dose of patience. After all, plate tectonics was more than a half-century in the making, and those revolutionaries had to look little deeper than the sea floor. Seismic imaging will continue to improve as better seismometers are spread more evenly about the globe. Seismic data are already distinguishing between temperature and compositional effects, painting an even more complex picture of mantle structure. Mineral physicists working in the lab will tease out more properties of rock under deep mantle conditions to inform interpretation of the seismic data, although still handicapped by the uncertain details of mantle composition. And modelers will more faithfully simulate the whole machine, drawing on seismics, mineral physics, and subtle geophysical observations such as gravity variations. Another 40 years should do it.

_________________________________________________________________

Are We Alone in the Universe?

Richard A. Kerr

Alone, in all that space? Not likely. Just do the numbers: Several hundred billion stars in our galaxy, hundreds of billions of galaxies in the observable universe, and 150 planets spied already in the immediate neighborhood of the sun.
That should make for plenty of warm, scummy little ponds where life could come together to begin billions of years of evolution toward technology-wielding creatures like ourselves. No, the really big question is when, if ever, we'll have the technological wherewithal to reach out and touch such intelligence. With a bit of luck, it could be in the next 25 years.

Workers in the search for extraterrestrial intelligence (SETI) would have needed more than a little luck in the first 45 years of the modern hunt for like-minded colleagues out there. Radio astronomer Frank Drake's landmark Project Ozma was certainly a triumph of hope over daunting odds. In 1960, Drake pointed a 26-meter radio telescope dish in Green Bank, West Virginia, at two stars for a few days each. Given the vacuum-tube technology of the time, he could scan across 0.4 megahertz of the microwave spectrum one channel at a time. Almost 45 years later, the SETI Institute in Mountain View, California, completed its 10-year-long Project Phoenix. Often using the 350-meter antenna at Arecibo, Puerto Rico, Phoenix researchers searched 710 star systems at 28 million channels simultaneously across an 1800-megahertz range. All in all, the Phoenix search was 100 trillion times more effective than Ozma was.

Besides stunning advances in search power, the first 45 years of modern SETI have also seen a diversification of search strategies. The Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP) has scanned billions of radio sources in the Milky Way by piggybacking receivers on antennas in use by observational astronomers, including Arecibo. And other groups are turning modest-sized optical telescopes to searching for nanosecond flashes from alien lasers.

[Figure 1: Listening for E.T. The SETI Institute is deploying an array of antennas and tying them into a giant "virtual telescope."]

Still, nothing has been heard.
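A naive multiplication of the Ozma and Phoenix figures quoted above lands within a factor of a few of the stated "100 trillion"; this is only a rough sketch, since the real comparison also folds in sensitivity and observing-time gains not modeled here:

```python
# Order-of-magnitude cross-check of the Ozma-vs-Phoenix comparison.
ozma = {"stars": 2, "channels": 1, "bandwidth_mhz": 0.4}
phoenix = {"stars": 710, "channels": 28e6, "bandwidth_mhz": 1800}

ratio = (phoenix["stars"] / ozma["stars"]) \
      * (phoenix["channels"] / ozma["channels"]) \
      * (phoenix["bandwidth_mhz"] / ozma["bandwidth_mhz"])

print(f"{ratio:.1e}")  # ~4.5e+13, the same order as "100 trillion"
```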
But then, Phoenix, for example, scanned just one or two nearby sunlike stars out of each 100 million stars out there. For such sparse sampling to work, advanced, broadcasting civilizations would have to be abundant, or searchers would have to get very lucky. To find the needle in a galaxy-size haystack, SETI workers are counting on the consistently exponential growth of computing power to continue for another couple of decades. In northern California, the SETI Institute has already begun constructing an array composed of individual 6-meter antennas. Ever-cheaper computer power will eventually tie 350 such antennas into "virtual telescopes," allowing scientists to search many targets at once. If Moore's law--that the cost of computation halves every 18 months--holds for another 15 years or so, SETI workers plan to use this antenna array approach to check out not a few thousand but perhaps a few million or even tens of millions of stars for alien signals. If there were just 10,000 advanced civilizations in the galaxy, they could well strike pay dirt before Science turns 150. The technology may well be available in coming decades, but SETI will also need money. That's no easy task in a field with as high a "giggle factor" as SETI has. The U.S. Congress forced NASA to wash its hands of SETI in 1993 after some congressmen mocked the whole idea of spending federal money to look for "little green men with misshapen heads," as one of them put it. Searching for another tippy-top branch of the evolutionary tree still isn't part of the NASA vision. For more than a decade, private funding alone has driven SETI. But the SETI Institute's planned $35 million array is only a prototype of the Square Kilometer Array that would put those tens of millions of stars within reach of SETI workers. For that, mainstream radio astronomers will have to be onboard--or we'll be feeling alone in the universe a long time indeed. 
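The Moore's-law arithmetic behind that projected jump from thousands of target stars to millions is easy to sketch (a sketch only, assuming the quoted 18-month doubling time holds for the full 15 years):

```python
# If the cost of computation halves every 18 months (the article's
# statement of Moore's law), a fixed search becomes roughly 1000x
# cheaper over 15 years, enough to scale a few-thousand-star search
# to a few million stars at constant cost.

halvings = 15 * 12 / 18        # number of 18-month periods in 15 years
cost_factor = 0.5 ** halvings  # fraction of today's cost remaining
print(f"{halvings:.0f} halvings -> ~{1 / cost_factor:.0f}x cheaper")
# prints "10 halvings -> ~1024x cheaper"
```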
_________________________________________________________________ How and Where Did Life on Earth Arise? Carl Zimmer* For the past 50 years, scientists have attacked the question of how life began in a pincer movement. Some approach it from the present, moving backward in time from life today to its simpler ancestors. Others march forward from the formation of Earth 4.55 billion years ago, exploring how lifeless chemicals might have become organized into living matter. Working backward, paleontologists have found fossils of microbes dating back at least 3.4 billion years. Chemical analysis of even older rocks suggests that photosynthetic organisms were already well established on Earth by 3.7 billion years ago. Researchers suspect that the organisms that left these traces shared the same basic traits found in all life today. All free-living organisms encode genetic information in DNA and catalyze chemical reactions using proteins. Because DNA and proteins depend so intimately on each other for their survival, it's hard to imagine one of them having evolved first. But it's just as implausible for them to have emerged simultaneously out of a prebiotic soup. Experiments now suggest that earlier forms of life could have been based on a third kind of molecule found in today's organisms: RNA. Once considered nothing more than a cellular courier, RNA turns out to be astonishingly versatile, not only encoding genetic information but also acting like a protein. Some RNA molecules switch genes on and off, for example, whereas others bind to proteins and other molecules. Laboratory experiments suggest that RNA could have replicated itself and carried out the other functions required to keep a primitive cell alive. Only after life passed through this "RNA world," many scientists now agree, did it take on a more familiar cast. Proteins are thousands of times more efficient as a catalyst than RNA is, and so once they emerged they would have been favored by natural selection. 
Likewise, genetic information can be replicated from DNA with far fewer errors than it can from RNA. Other scientists have focused their efforts on figuring out how the lifeless chemistry of a prebiotic Earth could have given rise to an RNA world. In 1953, working at the University of Chicago, Stanley Miller and Harold Urey demonstrated that experiments could shed light on this question. They ran an electric current through a mix of ammonia, methane, and other gases believed at the time to have been present on early Earth. They found that they could produce amino acids and other important building blocks of life.
[Figure 1: Cauldron of life? Deep-sea vents are one proposed site for life's start.]
Today, many scientists argue that the early atmosphere was dominated by other gases, such as carbon dioxide. But experiments in recent years have shown that under these conditions, many building blocks of life can be formed. In addition, comets and meteorites may have delivered organic compounds from space. Just where on Earth these building blocks came together as primitive life forms is a subject of debate. Starting in the 1980s, many scientists argued that life got its start in the scalding, mineral-rich waters streaming out of deep-sea hydrothermal vents. Evidence for a hot start included studies on the tree of life, which suggested that the most primitive species of microbes alive today thrive in hot water. But the hot-start hypothesis has cooled off a bit. Recent studies suggest that heat-loving microbes are not living fossils. Instead, they may have descended from less hardy species and evolved new defenses against heat. Some skeptics also wonder how delicate RNA molecules could have survived in boiling water. No single strong hypothesis has taken the hot start's place, however, although suggestions include tidal pools or oceans covered by glaciers. Research projects now under way may shed more light on how life began.
Scientists are running experiments in which RNA-based cells may be able to reproduce and evolve. NASA and the European Space Agency have launched probes that will visit comets, narrowing down the possible ingredients that might have been showered on early Earth. Most exciting of all is the possibility of finding signs of life on Mars. Recent missions to Mars have provided strong evidence that shallow seas of liquid water once existed on the Red Planet--suggesting that Mars might once have been hospitable to life. Future Mars missions will look for signs of life hiding in underground refuges, or fossils of extinct creatures. If life does turn up, the discovery could mean that life arose independently on both planets--suggesting that it is common in the universe--or that it arose on one planet and spread to the other. Perhaps martian microbes were carried to Earth on a meteorite 4 billion years ago, infecting our sterile planet. __________________________________________ Carl Zimmer is the author of Soul Made Flesh: The Discovery of the Brain--and How it Changed the World. _________________________________________________________________ What Determines Species Diversity? Elizabeth Pennisi Countless species of plants, animals, and microbes fill every crack and crevice on land and in the sea. They make the world go 'round, converting sunlight to energy that fuels the rest of life, cycling carbon and nitrogen between inorganic and organic forms, and modifying the landscape. In some places and some groups, hundreds of species exist, whereas in others, very few have evolved; the tropics, for example, are a complex paradise compared to higher latitudes. Biologists are striving to understand why. The interplay between environment and living organisms and between the organisms themselves plays a key role in encouraging or discouraging diversity, as do human disturbances, predator-prey relationships, and other food web connections.
But exactly how these and other forces work together to shape diversity is largely a mystery. The challenge is daunting. Baseline data are poor, for example: We don't yet know how many plant and animal species there are on Earth, and researchers can't even begin to predict the numbers and kinds of organisms that make up the microbial world. Researchers probing the evolution of, and limits to, diversity also lack a standardized time scale because evolution takes place over periods lasting from days to millions of years. Moreover, there can be almost as much variation within a species as between two closely related ones. Nor is it clear what genetic changes will result in a new species and what their true influence on speciation is. Understanding what shapes diversity will require a major interdisciplinary effort, involving paleontological interpretation, field studies, laboratory experimentation, genomic comparisons, and effective statistical analyses. A few exhaustive inventories, such as the United Nations' Millennium Project and an around-the-world assessment of genes from marine microbes, should improve baseline data, but they will barely scratch the surface. Models that predict when one species will split into two will help. And an emerging discipline called evo-devo is probing how genes involved in development contribute to evolution. Together, these efforts will go a long way toward clarifying the history of life. Paleontologists have already made headway in tracking the expansion and contraction of the ranges of various organisms over the millennia. They are finding that geographic distribution plays a key role in speciation. Future studies should continue to reveal large-scale patterns of distribution and perhaps shed more light on the origins of mass extinctions and the effects of these catastrophes on the evolution of new species. 
From field studies of plants and animals, researchers have learned that habitat can influence morphology and behavior--particularly sexual selection--in ways that hasten or slow down speciation. Evolutionary biologists have also discovered that speciation can stall out, for example, as separated populations become reconnected, homogenizing genomes that would otherwise diverge. Molecular forces, such as low mutation rates or meiotic drive--in which certain alleles have an increased likelihood of being passed from one generation to the next--influence the rate of speciation. And in some cases, diversity can vary within a single ecosystem: Edges of ecosystems sometimes support fewer species than the interior. Evolutionary biologists are just beginning to sort out how all these factors are intertwined in different ways for different groups of organisms. The task is urgent: Figuring out what shapes diversity could be important for understanding the nature of the wave of extinctions the world is experiencing and for determining strategies to mitigate it. _________________________________________________________________ What Genetic Changes Made Us Uniquely Human? Elizabeth Culotta Every generation of anthropologists sets out to explore what it is that makes us human. Famed paleoanthropologist Louis Leakey thought tools made the man, and so when he uncovered hominid bones near stone tools in Tanzania in the 1960s, he labeled the putative toolmaker Homo habilis, the earliest member of the human genus. But then primatologist Jane Goodall demonstrated that chimps also use tools of a sort, and today researchers debate whether H. habilis truly belongs in Homo. Later studies have homed in on traits such as bipedality, culture, language, humor, and, of course, a big brain as the unique birthright of our species.
Yet many of these traits can also be found, at least to some degree, in other creatures: Chimps have rudimentary culture, parrots speak, and some rats seem to giggle when tickled. What is beyond doubt is that humans, like every other species, have a unique genome shaped by our evolutionary history. Now, for the first time, scientists can address anthropology's fundamental question at a new level: What are the genetic changes that make us human? With the human genome in hand and primate genome data beginning to pour in, we are entering an era in which it may become possible to pinpoint the genetic changes that help separate us from our closest relatives. A rough draft of the chimp sequence has already been released, and a more detailed version is expected soon. The genome of the macaque is nearly complete, the orangutan is under way, and the marmoset was recently approved. All these will help reveal the ancestral genotype at key places on the primate tree. The genetic differences revealed between humans and chimps are likely to be profound, despite the oft-repeated statistic that only about 1.2% of our DNA differs from that of chimps. A change in every 100th base could affect thousands of genes, and the percentage difference becomes much larger if you count insertions and deletions. Even if we document all of the perhaps 40 million sequence differences between humans and chimps, what do they mean? Many are probably simply the consequence of 6 million years of genetic drift, with little effect on body or behavior, whereas other small changes--perhaps in regulatory, noncoding sequences--may have dramatic consequences. Half of the differences might define a chimp rather than a human. How can we sort them all out? One way is to zero in on the genes that have been favored by natural selection in humans. 
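A back-of-envelope check of the divergence figures quoted above. The roughly 3-billion-base genome length is an assumption not stated in the text, and the quoted ~40 million total also counts insertions and deletions, which this single-base estimate omits:

```python
# The quoted 1.2% single-base divergence, applied to a genome of
# roughly 3 billion bases (assumed; not stated in the text), already
# implies tens of millions of differences before indels are counted.

genome_size = 3.0e9   # approximate haploid human genome length, bases
divergence = 0.012    # quoted human-chimp single-base divergence
substitutions = genome_size * divergence
print(f"~{substitutions / 1e6:.0f} million single-base differences")
# prints "~36 million single-base differences"
```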
Studies seeking subtle signs of selection in the DNA of humans and other primates have identified dozens of genes, in particular those involved in host-pathogen interactions, reproduction, sensory systems such as olfaction and taste, and more. But not all of these genes helped set us apart from our ape cousins originally. Our genomes reveal that we have evolved in response to malaria, but malaria defense didn't make us human. So some researchers have started with clinical mutations that impair key traits, then traced the genes' evolution, an approach that has identified a handful of tantalizing genes. For example, MCPH1 and ASPM cause microcephaly when mutated, FOXP2 causes speech defects, and all three show signs of selection pressure during human, but not chimp, evolution. Thus they may have played roles in the evolution of humans' large brains and speech. But even with genes like these, it is often difficult to be completely sure of what they do. Knockout experiments, the classic way to reveal function, can't be done in humans and apes for ethical reasons. Much of the work will therefore demand comparative analyses of the genomes and phenotypes of large numbers of humans and apes. Already, some researchers are pushing for a "great ape 'phenome' project" to match the incoming tide of genomic data with more phenotypic information on apes. Other researchers argue that clues to function can best be gleaned by mining natural human variability, matching mutations in living people to subtle differences in biology and behavior. Both strategies face logistical and ethical problems, but some progress seems likely. A complete understanding of uniquely human traits will, however, include more than DNA. Scientists may eventually circle back to those long-debated traits of sophisticated language, culture, and technology, in which nurture as well as nature plays a leading role. 
We're in the age of the genome, but we can still recognize that it takes much more than genes to make the human. _________________________________________________________________ How Are Memories Stored and Retrieved? Greg Miller Packed into the kilogram or so of neural wetware between the ears is everything we know: a compendium of useful and trivial facts about the world, the history of our lives, plus every skill we've ever learned, from riding a bike to persuading a loved one to take out the trash. Memories make each of us unique, and they give continuity to our lives. Understanding how memories are stored in the brain is an essential step toward understanding ourselves. Neuroscientists have already made great strides, identifying key brain regions and potential molecular mechanisms. Still, many important questions remain unanswered, and a chasm gapes between the molecular and whole-brain research. The birth of the modern era of memory research is often pegged to the publication, in 1957, of an account of the neurological patient H.M. At age 27, H.M. had large chunks of the temporal lobes of his brain surgically removed in a last-ditch effort to relieve chronic epilepsy. The surgery worked, but it left H.M. unable to remember anything that happened--or anyone he met--after his surgery. The case showed that the medial temporal lobes (MTL), which include the hippocampus, are crucial for making new memories. H.M.'s case also revealed, on closer examination, that memory is not a monolith: Given a tricky mirror drawing task, H.M.'s performance improved steadily over 3 days even though he had no memory of his previous practice. Remembering how is not the same as remembering what, as far as the brain is concerned. Thanks to experiments on animals and the advent of human brain imaging, scientists now have a working knowledge of the various kinds of memory as well as which parts of the brain are involved in each. But persistent gaps remain. 
Although the MTL has indeed proved critical for declarative memory--the recollection of facts and events--the region remains something of a black box. How its various components interact during memory encoding and retrieval is unresolved. Moreover, the MTL is not the final repository of declarative memories. Such memories are apparently filed to the cerebral cortex for long-term storage, but how this happens, and how memories are represented in the cortex, remains unclear. More than a century ago, the great Spanish neuroanatomist Santiago Ramón y Cajal proposed that making memories must require neurons to strengthen their connections with one another. Dogma at the time held that no new neurons are born in the adult brain, so Ramón y Cajal made the reasonable assumption that the key changes must occur between existing neurons. Until recently, scientists had few clues about how this might happen.
[Figure 1: Memorable diagram. Santiago Ramón y Cajal's drawing of the hippocampus. He proposed that memories involve strengthened neural connections.]
Since the 1970s, however, work on isolated chunks of nervous-system tissue has identified a host of molecular players in memory formation. Many of the same molecules have been implicated in both declarative and nondeclarative memory and in species as varied as sea slugs, fruit flies, and rodents, suggesting that the molecular machinery for memory has been widely conserved. A key insight from this work has been that short-term memory (lasting minutes) involves chemical modifications that strengthen existing connections, called synapses, between neurons, whereas long-term memory (lasting days or weeks) requires protein synthesis and probably the construction of new synapses. Tying this work to the whole-brain research is a major challenge.
A potential bridge is a process called long-term potentiation (LTP), a type of synaptic strengthening that has been scrutinized in slices of rodent hippocampus and is widely considered a likely physiological basis for memory. A conclusive demonstration that LTP really does underlie memory formation in vivo would be a big breakthrough. Meanwhile, more questions keep popping up. Recent studies have found that patterns of neural activity seen when an animal is learning a new task are replayed later during sleep. Could this play a role in solidifying memories? Other work shows that our memories are not as trustworthy as we generally assume. Why is memory so labile? A hint may come from recent studies that revive the controversial notion that memories are briefly vulnerable to manipulation each time they're recalled. Finally, the no-new-neurons dogma went down in flames in the 1990s, with the demonstration that the hippocampus, of all places, is a virtual neuron nursery throughout life. The extent to which these newborn cells support learning and memory remains to be seen. _________________________________________________________________ How Did Cooperative Behavior Evolve? Elizabeth Pennisi When Charles Darwin was working out his grand theory on the origin of species, he was perplexed by the fact that animals from ants to people form social groups in which most individuals work for the common good. This seemed to run counter to his proposal that individual fitness was key to surviving over the long term. By the time he wrote The Descent of Man, however, he had come up with a few explanations. He suggested that natural selection could encourage altruistic behavior among kin so as to improve the reproductive potential of the "family." He also introduced the idea of reciprocity: that unrelated but familiar individuals would help each other out if both were altruistic. 
A century of work with dozens of social species has borne out his ideas to some degree, but the details of how and why cooperation evolved remain to be worked out. The answers could help explain human behaviors that seem to make little sense from a strict evolutionary perspective, such as risking one's life to save a drowning stranger. Animals help each other out in many ways. In social species from honeybees to naked mole rats, kinship fosters cooperation: Females forgo reproduction and instead help the dominant female with her young. And common agendas help unrelated individuals work together. Male chimpanzees, for example, gang up against predators, protecting each other at a potential cost to themselves. Generosity is pervasive among humans. Indeed, some anthropologists argue that the evolution of the tendency to trust one's relatives and neighbors helped humans become Earth's dominant vertebrate: The ability to work together provided our early ancestors more food, better protection, and better childcare, which in turn improved reproductive success. However, the degree of cooperation varies. "Cheaters" can gain a leg up on the rest of humankind, at least in the short term. But cooperation prevails among many species, suggesting that this behavior is a better survival strategy, over the long run, despite all the strife among ethnic, political, religious, even family groups now rampant within our species. Evolutionary biologists and animal behavior researchers are searching out the genetic basis and molecular drivers of cooperative behaviors, as well as the physiological, environmental, and behavioral impetus for sociality. Neuroscientists studying mammals from voles to hyenas are discovering key correlations between brain chemicals and social strategies. Others with a more mathematical bent are applying evolutionary game theory, a modeling approach developed for economics, to quantify cooperation and predict behavioral outcomes under different circumstances. 
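A minimal sketch of the game-theoretic approach described above: an iterated prisoner's dilemma in which one strategy (tit for tat) remembers and reciprocates the partner's last move while another always defects. The strategies and payoff values (T=5, R=3, P=1, S=0) are standard textbook choices, not taken from any study the article describes.

```python
# Iterated prisoner's dilemma: reciprocity requires memory. "Tit for
# tat" copies the partner's previous move; "always defect" ignores
# history. Payoffs are the standard (assumed) values T=5, R=3, P=1, S=0.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    return history[-1] if history else 'C'   # copy partner's last move

def always_defect(history):
    return 'D'

def play(p1, p2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)   # each player sees the other's history
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploited once only
```

Two reciprocators sustain cooperation for all 100 rounds; against a pure defector, the remembering strategy is exploited only in the first round, echoing the point that reciprocity is practiced only by organisms that can keep track of who is helpful.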
Game theory has helped reveal a seemingly innate desire for fairness: Game players will spend time and energy to punish unfair actions, even though they gain nothing by doing so. Similar studies have shown that even when two people meet just once, they tend to be fair to each other. Those actions are hard to explain, as they don't seem to follow the basic tenet that cooperation is really based on self-interest. The models developed through these games are still imperfect. They do not adequately consider, for example, the effect of emotions on cooperation. Nonetheless, with game theory's increasing sophistication, researchers hope to gain a clearer sense of the rules that govern complex societies. Together, these efforts are helping social scientists and others build on Darwin's observations about cooperation. As Darwin predicted, reciprocity is a powerful fitness tactic. But it is not a pervasive one. Modern researchers have discovered that a good memory is a prerequisite: It seems reciprocity is practiced only by organisms that can keep track of those who are helpful and those who are not. Humans have a great memory for faces and thus can maintain lifelong good--or hard--feelings toward people they don't see for years. Most other species exhibit reciprocity only over very short time scales, if at all. Limited to his personal observations, Darwin was able to come up with only general rationales for cooperative behavior. Now, with new insights from game theory and other promising experimental approaches, biologists are refining Darwin's ideas and hope, bit by bit, to understand just what it takes to bring out our cooperative spirit. _________________________________________________________________ How Will Big Pictures Emerge From a Sea of Biological Data? Elizabeth Pennisi Biology is rich in descriptive data--and getting richer all the time.
Large-scale methods of probing samples, such as DNA sequencing, microarrays, and automated gene-function studies, are filling new databases to the brim. Many subfields from biomechanics to ecology have gone digital, and as a result, observations are more precise and more plentiful. A central question now confronting virtually all fields of biology is whether scientists can deduce from this torrent of molecular data how systems and whole organisms work. All this information needs to be sifted, organized, compiled, and--most importantly--connected in a way that enables researchers to make predictions based on general principles. Enter systems biology. Loosely defined and still struggling to find its way, this newly emerging approach aims to connect the dots that have emerged from decades of molecular, cellular, organismal, and even environmental observations. Its proponents seek to make biology more quantitative by relying on mathematics, engineering, and computer science to build a more rigorous framework for linking disparate findings. They argue that it is the only way the field can move forward. And they suggest that biomedicine, particularly deciphering risk factors for disease, will benefit greatly. The field got a big boost from the completion of the human genome sequence. The product of a massive, trip-to-the-moon logistical effort, the sequence is now a hard and fast fact. The biochemistry of human inheritance has been defined and measured. And that has inspired researchers to try to make other aspects of life equally knowable. Molecular geneticists dream of having a similarly comprehensive view of networks that control genes: For example, they would like to identify rules explaining how a single DNA sequence can express different proteins, or varying amounts of protein, in different circumstances (see p. 80).
Cell biologists would like to reduce the complex communication patterns traced by molecules that regulate the health of the cell to a set of signaling rules. Developmental biologists would like a comprehensive picture of how the embryo manages to direct a handful of cells into a myriad of specialized functions in bone, blood, and skin tissue. These hard puzzles can only be solved by systems biology, proponents say. The same can be said for neuroscientists trying to work out the emergent properties--higher thought, for example--hidden in complex brain circuits. To understand ecosystem changes, including global warming, ecologists need ways to incorporate physical as well as biological data into their thinking.
[Figure 1: Systems approach. Circuit diagrams help clarify nerve cell functions.]
Today, systems biologists have only begun to tackle relatively simple networks. They have worked out the metabolic pathway in yeast for breaking down galactose, a carbohydrate. Others have tracked the first few hours of the embryonic development of sea urchins and other organisms with the goal of seeing how various transcription factors alter gene expression over time. Researchers are also developing rudimentary models of signaling networks in cells and simple brain circuits. Progress is limited by the difficulty of translating biological patterns into computer models. Network computer programs themselves are relatively simple, and the methods of portraying the results in ways that researchers can understand and interpret need improving. New institutions around the world are gathering interdisciplinary teams of biologists, mathematicians, and computer specialists to help promote systems biology approaches. But the field is still in its early days. No one yet knows whether intensive interdisciplinary work and improved computational power will enable researchers to create a comprehensive, highly structured picture of how life works.
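To give a flavor of what a "set of rules" for a gene network can look like, here is a toy Boolean network of the sort systems biologists use as a starting point. The three genes and their update rules are invented for illustration; this is not the yeast galactose circuit.

```python
# A toy Boolean gene network: each gene is ON or OFF, and all genes
# update together from the current state. The genes and rules here
# are hypothetical, chosen only to show nontrivial dynamics.

def step(state):
    """One synchronous update of the three-gene toy network."""
    a, b, c = state['A'], state['B'], state['C']
    return {
        'A': not c,      # C represses A
        'B': a,          # A activates B
        'C': a and b,    # A and B jointly activate C
    }

state = {'A': True, 'B': False, 'C': False}
for t in range(6):
    print(t, state)
    state = step(state)
```

Even this three-gene toy settles into a five-state cycle rather than a fixed point, a reminder of why real networks with thousands of interacting genes need serious computational machinery.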
_________________________________________________________________ How Far Can We Push Chemical Self-Assembly? Robert F. Service Most physical scientists nowadays focus on uncovering nature's mysteries; chemists build things. There is no synthetic astronomy or synthetic physics, at least for now. But chemists thrive on finding creative new ways to assemble molecules. For the last 100 years, they have done that mostly by making and breaking the strong covalent bonds that form when atoms share electrons. Using that trick, they have learned to combine as many as 1000 atoms into essentially any molecular configuration they please. Impressive as it is, this level of complexity pales in comparison to what nature flaunts all around us. Everything from cells to cedar trees is knit together using a myriad of weaker links between small molecules. These weak interactions, such as hydrogen bonds, van der Waals forces, and π-π interactions, govern the assembly of everything from DNA in its famous double helix to the bonding of H2O molecules in liquid water. More than just riding herd on molecules, such subtle forces make it possible for structures to assemble themselves into an ever more complex hierarchy. Lipids coalesce to form cell membranes. Cells organize to form tissues. Tissues combine to create organisms. Today, chemists can't approach the complexity of what nature makes look routine. Will they ever learn to make complex structures that self-assemble? Well, they've made a start. Over the past 3 decades, chemists have made key strides in learning the fundamental rules of noncovalent bonding. Among these rules: Like prefers like. We see this in hydrophobic and hydrophilic interactions that propel lipid molecules in water to corral together to form the two-layer membranes that serve as the coatings surrounding cells. They bunch their oily tails together to avoid any interaction with water and leave their more polar head groups facing out into the liquid.
Another rule: Self-assembly is governed by energetically favorable reactions. Leave the right component molecules alone, and they will assemble themselves into complex ordered structures. Chemists have learned to take advantage of these and other rules to design self-assembling systems with a modest degree of complexity. Drug-carrying liposomes, made with lipid bilayers resembling those in cells, are used commercially to ferry drugs to cancerous tissues in patients. And self-assembled molecules called rotaxanes, which can act as molecular switches that oscillate back and forth between two stable states, hold promise as switches in future molecular-based computers. But the need for increased complexity is growing, driven by the miniaturization of computer circuitry and the rise of nanotechnology. As features on computer chips continue to shrink, the cost of manufacturing these ever-smaller components is skyrocketing. Right now, companies make them by whittling materials down to the desired size. At some point, however, it will become cheaper to design and build them chemically from the bottom up. Self-assembly is also the only practical approach for building a wide variety of nanostructures. Making sure the components assemble themselves correctly, however, is not an easy task. Because the forces at work are so small, self-assembling molecules can get trapped in undesirable conformations, making defects all but impossible to avoid. Any new system that relies on self-assembly must be able either to tolerate those defects or repair them. Again, biology offers an example in DNA. When enzymes copy DNA strands during cell division, they invariably make mistakes--occasionally inserting an A when they should have inserted a T, for example. Some of those mistakes get by, but most are caught by DNA-repair enzymes that scan the newly synthesized strands and correct copying errors. Strategies like that won't be easy for chemists to emulate.
But if they want to make complex, ordered structures from the ground up, they'll have to get used to thinking a bit more like nature. _________________________________________________________________ What Are the Limits of Conventional Computing? Charles Seife At first glance, the ultimate limit of computation seems to be an engineering issue. How much energy can you put in a chip without melting it? How fast can you flip a bit in your silicon memory? How big can you make your computer and still fit it in a room? These questions don't seem terribly profound. In fact, computation is more abstract and fundamental than figuring out the best way to build a computer. This realization came in the mid-1930s, when Princeton mathematicians Alonzo Church and Alan Turing showed--roughly speaking--that any calculation involving bits and bytes can be done on an idealized computer known as a Turing machine. By showing that all classical computers are essentially alike, this discovery enabled scientists and mathematicians to ask fundamental questions about computation without getting bogged down in the minutiae of computer architecture. For example, theorists can now classify computational problems into broad categories. P problems are those, broadly speaking, that can be solved quickly, such as alphabetizing a list of names. NP problems are much tougher to solve but relatively easy to check once you've reached an answer. An example is the traveling salesman problem, finding the shortest possible route through a series of locations. All known algorithms for getting an answer take lots of computing power, and even relatively small versions might be out of reach of any classical computer. Mathematicians have shown that if you could come up with a quick and easy shortcut to solving any one of the hardest type of NP problems, you'd be able to crack them all. In effect, the NP problems would turn into P problems. But it's uncertain whether such a shortcut exists--whether P = NP. 
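The asymmetry between solving and checking can be made concrete in a short sketch. The distances below are invented; the point is that verifying a proposed tour takes time linear in the number of cities, while the only known general way to find the best tour tries all (n-1)! orderings.

```python
# Illustration of the P-vs-NP asymmetry via the traveling salesman
# problem: tour_length checks a candidate answer quickly, while
# brute_force_tsp searches every ordering -- factorial growth that
# becomes hopeless beyond a handful of cities. Distances are made up.
from itertools import permutations

dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Verify (score) a candidate tour: linear in the number of cities."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(n):
    """Find the best tour: tries all (n-1)! orderings starting from city 0."""
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p))
    return (0,) + best, tour_length((0,) + best)

tour, length = brute_force_tsp(5)  # 4! = 24 candidate tours for 5 cities
```

For 5 cities that is 24 candidates; for 20 cities it is already about 1.2 x 10^17, which is why even relatively small instances are out of reach of exhaustive search.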
Scientists think not, but proving this is one of the great unanswered questions in mathematics. In the 1940s, Bell Labs scientist Claude Shannon showed that bits are not just for computers; they are the fundamental units of describing the information that flows from one object to another. There are physical laws that govern how fast a bit can move from place to place, how much information can be transferred back and forth over a given communications channel, and how much energy it takes to erase a bit from memory. All classical information-processing machines are subject to these laws--and because information seems to be rattling back and forth in our brains, do the laws of information mean that our thoughts are reducible to bits and bytes? Are we merely computers? It's an unsettling thought. But there is a realm beyond the classical computer: the quantum. The probabilistic nature of quantum theory allows atoms and other quantum objects to store information that's not restricted to only the binary 0 or 1 of information theory, but can also be 0 and 1 at the same time. Physicists around the world are building rudimentary quantum computers that exploit this and other quantum effects to do things that are provably impossible for ordinary computers, such as finding a target record in a database with too few queries. But scientists are still trying to figure out what quantum-mechanical properties make quantum computers so powerful and to engineer quantum computers big enough to do something useful. By learning the strange logic of the quantum world and using it to do computing, scientists are delving deep into the laws of the subatomic world. Perhaps something as seemingly mundane as the quest for computing power might lead to a newfound understanding of the quantum realm. _________________________________________________________________ Can We Selectively Shut Off Immune Responses? 
Jon Cohen In the past few decades, organ transplantation has gone from experimental to routine. In the United States alone, more than 20,000 heart, liver, and kidney transplants are performed every year. But for transplant recipients, one prospect has remained unchanged: a lifetime of taking powerful drugs to suppress the immune system, a treatment that can have serious side effects. Researchers have long sought ways to induce the immune system to tolerate a transplant without blunting the body's entire defenses, but so far, they have had limited success. They face formidable challenges. Although immune tolerance can occur--in rare cases, transplant recipients who stop taking immunosuppressants have not rejected their foreign organs--researchers don't have a clear picture of what is happening at the molecular and cellular levels to allow this to happen. Tinkering with the immune system is also a bit like tinkering with a mechanical watch: Fiddle with one part, and you may disrupt the whole mechanism. And there is a big roadblock to testing drugs designed to induce tolerance: It is hard to know if they work unless immunosuppressant drugs are withdrawn, and that would risk rejection of the transplant. But if researchers can figure out how to train the immune system to tolerate transplants, the knowledge could have implications for the treatment of autoimmune diseases, which also result from unwanted immune attack--in these cases on some of the body's own tissues. A report in Science 60 years ago fired the starting gun in the race to induce transplant tolerance--a race that has turned into a marathon. Ray Owen of the University of Wisconsin, Madison, reported that fraternal twin cattle sometimes share a placenta and are born with each other's red blood cells, a state referred to as mixed chimerism. The cattle tolerated the foreign cells with no apparent problems. 
A few years later, Peter Medawar and his team at the University of Birmingham, U.K., showed that fraternal twin cattle with mixed chimerism readily accept skin grafts from each other. Medawar did not immediately appreciate the link to Owen's work, but when he saw the connection, he decided to inject fetal mice in utero with tissue from mice of a different strain. In a publication in Nature in 1953, the researchers showed that, after birth, some of these mice tolerated skin grafts from different strains. This influential experiment led many to devote their careers to transplantation and also raised hopes that the work would lead to cures for autoimmune diseases. Immunologists, many of them working with mice, have since spelled out several detailed mechanisms behind tolerance. The immune system can, for example, dispatch "regulatory" cells that suppress immune attacks against self. Or the system can force harmful immune cells to commit suicide or to go into a dysfunctional stupor called anergy. Researchers indeed now know fine details about the genes, receptors, and cell-to-cell communications that drive these processes. Yet it's one matter to unravel how the immune system works and another to figure out safe ways to manipulate it. Transplant researchers are pursuing three main strategies to induce tolerance. One builds on Medawar's experiments by trying to exploit chimerism. Researchers infuse the patient with the organ donor's bone marrow in hopes that the donor's immune cells will teach the host to tolerate the transplant; donor immune cells that come along with the transplanted organ also, some contend, can teach tolerance. A second strategy uses drugs to train T cells to become anergic or commit suicide when they see the foreign antigens on the transplanted tissue. 
The third approach turns up production of T regulatory cells, which prevent specific immune cells from copying themselves and can also suppress rejection by secreting biochemicals called cytokines that direct the immune orchestra to change its tune. All these strategies face a common problem: It is maddeningly difficult to judge whether the approach has failed or succeeded because there are no reliable "biomarkers" that indicate whether a person has become tolerant to a transplant. So the only way to assess tolerance is to stop drug treatment, which puts the patient at risk of rejecting the organ. Similarly, ethical concerns often require researchers to test drugs aimed at inducing tolerance in concert with immunosuppressive therapy. This, in turn, can undermine the drugs' effectiveness because they need a fully functioning immune system to do their job. If researchers can complete their 50-year quest to induce immune tolerance safely and selectively, the prospects for hundreds of thousands of transplant recipients would be greatly improved, and so, too, might the prospects for controlling autoimmune diseases. _________________________________________________________________ Do Deeper Principles Underlie Quantum Uncertainty and Nonlocality? Charles Seife "Quantum mechanics is very impressive," Albert Einstein wrote in 1926. "But an inner voice tells me that it is not yet the real thing." As quantum theory matured over the years, that voice has gotten quieter--but it has not been silenced. There is a relentless murmur of confusion underneath the chorus of praise for quantum theory. Quantum theory was born at the very end of the 19th century and soon became one of the pillars of modern physics. It describes, with incredible precision, the bizarre and counterintuitive behavior of the very small: atoms and electrons and other wee beasties of the submicroscopic world. But that success came with the price of discomfort. 
The equations of quantum mechanics work very well; they just don't seem to make sense. No matter how you look at the equations of quantum theory, they allow a tiny object to behave in ways that defy intuition. For example, such an object can be in "superposition": It can have two mutually exclusive properties at the same time. The mathematics of quantum theory says that an atom, for example, can be on the left side of a box and the right side of the box at the very same instant, as long as the atom is undisturbed and unobserved. But as soon as an observer opens the box and tries to spot where the atom is, the superposition collapses and the atom instantly "chooses" whether to be on the right or the left. This idea is almost as unsettling today as it was 80 years ago, when Erwin Schrödinger ridiculed superposition by describing a half-living, half-dead cat. That is because quantum theory changes what the meaning of "is" is. In the classical world, an object has a solid reality: Even a cloud of gas is well described by hard little billiard ball-like pieces, each of which has a well-defined position and velocity. Quantum theory seems to undermine that solid reality. Indeed, the famous Uncertainty Principle, which arises directly from the mathematics of quantum theory, says that objects' positions and momenta are smeary and ill defined, and gaining knowledge about one implies losing knowledge about the other. The early quantum physicists dealt with this unreality by saying that the "is"--the fundamental objects handled by the equations of quantum theory--were not actually particles that had an extrinsic reality but "probability waves" that merely had the capability of becoming "real" when an observer makes a measurement. This so-called Copenhagen Interpretation makes sense, if you're willing to accept that reality is probability waves and not solid objects. Even so, it still doesn't sufficiently explain another weirdness of quantum theory: nonlocality. 
In 1935, Einstein came up with a scenario that still defies common sense. In his thought experiment, two particles fly away from each other and wind up at opposite ends of the galaxy. But the two particles happen to be "entangled"--linked in a quantum-mechanical sense--so that one particle instantly "feels" what happens to its twin. Measure one, and the other is instantly transformed by that measurement as well; it's as if the twins mystically communicate, instantly, over vast regions of space. This "nonlocality" is a mathematical consequence of quantum theory and has been measured in the lab. The spooky action apparently ignores distance and the flow of time; in theory, particles can be entangled after their entanglement has already been measured. On one level, the weirdness of quantum theory isn't a problem at all. The mathematical framework is sound and describes all these bizarre phenomena well. If we humans can't imagine a physical reality that corresponds to our equations, so what? That attitude has been called the "shut up and calculate" interpretation of quantum mechanics. But to others, our difficulties in wrapping our heads around quantum theory hint at greater truths yet to be understood. Some physicists in the second group are busy trying to design experiments that can get to the heart of the weirdness of quantum theory. They are slowly testing what causes quantum superpositions to "collapse"--research that may gain insight into the role of measurement in quantum theory as well as into why big objects behave so differently from small ones. Others are looking for ways to test various explanations for the weirdnesses of quantum theory, such as the "many worlds" interpretation, which explains superposition, entanglement, and other quantum phenomena by positing the existence of parallel universes. Through such efforts, scientists might hope to get beyond the discomfort that led Einstein to declare that "[God] does not play dice." 
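The laboratory measurements of nonlocality mentioned above are usually phrased as tests of Bell's inequality. A minimal numerical sketch, using the textbook quantum prediction for the entangled singlet state (standard quantum mechanics, not anything specific to this article): measurements along the same axis are perfectly anticorrelated, and the CHSH combination of correlations reaches 2√2, above the bound of 2 that any local "billiard ball" theory must obey.

```python
# Quantum prediction for the spin correlation of two entangled particles
# in the singlet state, measured along angles a and b: E(a, b) = -cos(a - b).
# Any local hidden-variable theory satisfies |S| <= 2 for the CHSH
# combination below; quantum mechanics predicts 2*sqrt(2) ~ 2.83.
import math

def correlation(a, b):
    """Singlet-state correlation for measurement angles a, b (radians)."""
    return -math.cos(a - b)

# Standard angle choices that maximize the CHSH violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(correlation(a, b) - correlation(a, b2)
        + correlation(a2, b) + correlation(a2, b2))
# S comes out to 2*sqrt(2), exceeding the classical limit of 2.
```

The same-axis case, correlation(x, x) = -1, is the "measure one, and the other is instantly transformed" behavior described above: the twins always disagree when asked the same question.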
_________________________________________________________________ Is an Effective HIV Vaccine Feasible? Jon Cohen In the 2 decades since researchers identified HIV as the cause of AIDS, more money has been spent on the search for a vaccine against the virus than on any vaccine effort in history. The U.S. National Institutes of Health alone invests nearly $500 million each year, and more than 50 different preparations have entered clinical trials. Yet an effective AIDS vaccine, which potentially could thwart millions of new HIV infections each year, remains a distant dream. Although AIDS researchers have turned the virus inside-out and carefully detailed how it destroys the immune system, they have yet to unravel which immune responses can fend off an infection. That means, as one AIDS vaccine researcher famously put it more than a decade ago, the field is "flying without a compass." Some skeptics contend that no vaccine will ever stop HIV. They argue that the virus replicates so quickly and makes so many mistakes during the process that vaccines can't possibly fend off all the types of HIV that exist. HIV also has developed sophisticated mechanisms to dodge immune attack, shrouding its surface protein in sugars to hide vulnerable sites from antibodies and producing proteins that thwart production of other immune warriors. And the skeptics point out that vaccine developers have had little success against pathogens like HIV that routinely outwit the immune system--the malaria parasite, hepatitis C virus, and the tuberculosis bacillus are prime examples. Yet AIDS vaccine researchers have solid reasons to believe they can succeed. Monkey experiments have shown that vaccines can protect animals from SIV, a simian relative of HIV. Several studies have identified people who repeatedly expose themselves to HIV but remain uninfected, suggesting that something is stopping the virus. 
A small percentage of people who do become infected never seem to suffer any harm, and others hold the virus at bay for a decade or more before showing damage to their immune systems. Scientists also have found that some rare antibodies do work powerfully against the virus in test tube experiments. At the start, researchers pinned their hopes on vaccines designed to trigger production of antibodies against HIV's surface protein. The approach seemed promising because HIV uses the surface protein to latch onto white blood cells and establish an infection. But vaccines that only contained HIV's surface protein looked lackluster in animal and test tube studies, and then proved worthless in large-scale clinical trials. Now, researchers are intensely investigating other approaches. When HIV manages to thwart antibodies and establish an infection, a second line of defense, cellular immunity, specifically targets and eliminates HIV-infected cells. Several vaccines which are now being tested aim to stimulate production of killer cells, the storm troopers of the cellular immune system. But cellular immunity involves other players--such as macrophages, the network of chemical messengers called cytokines, and so-called natural killer cells--that have received scant attention. The hunt for an antibody-based vaccine also is going through something of a renaissance, although it's requiring researchers to think backward. Vaccine researchers typically start with antigens--in this case, pieces of HIV--and then evaluate the antibodies they elicit. But now researchers have isolated more than a dozen antibodies from infected people that have blocked HIV infection in test tube experiments. The trick will be to figure out which specific antigens triggered their production. It could well be that a successful AIDS vaccine will need to stimulate both the production of antibodies and cellular immunity, a strategy many are attempting to exploit. 
Perhaps the key will be stimulating immunity at mucosal surfaces, where HIV typically enters. It's even possible that researchers will discover an immune response that no one knows about today. Or perhaps the answer lies in the interplay between the immune system and human genetic variability: Studies have highlighted genes that strongly influence who is most susceptible--and who is most resistant--to HIV infection and disease. Wherever the answer lies, the insights could help in the development of vaccines against other diseases that, like HIV, don't easily succumb to immune attack and that kill millions of people. Vaccine developers for these diseases will probably also have to look in unusual places for answers. The maps created by AIDS vaccine researchers currently exploring uncharted immunologic terrain could prove invaluable. From shovland at mindspring.com Sun Jul 3 15:36:59 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 3 Jul 2005 08:36:59 -0700 Subject: [Paleopsych] when the body runs riot Message-ID: <01C57FAA.624341F0.shovland@mindspring.com> The body has mechanism for maintaining homeostatis, but those can be overwhelmed, resulting in disease. The genes are blueprints for making healthy cells, but they don't work unless they get the correct inputs. Steve Hovland www.stevehovland.net -----Original Message----- From: Christian Rauh [SMTP:christian.rauh at uconn.edu] Sent: Sunday, July 03, 2005 7:40 AM To: The new improved paleopsych list Subject: Re: [Paleopsych] when the body runs riot << File: ATT00000.txt; charset = UTF-8 >> From shovland at mindspring.com Sun Jul 3 17:54:31 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 3 Jul 2005 10:54:31 -0700 Subject: [Paleopsych] Bio-Entanglement Message-ID: <01C57FBD.98559820.shovland@mindspring.com> >From Entangled Minds, by Dean Radin, PDF available online at the url below. 
"In physics, the idea of entanglement -- the quantum theory prediction that under certain circumstances particles that appear to be isolated are actually instantaneously connected through space and time -- is not only known to be demonstrably real, but is far more pervasive and robust than anyone had imagined even a few years ago. Devising new forms of entanglement has become a central focus in the accelerating race towards developing practical quantum computers. The growing pressure to develop workable quantum computers is rapidly expanding our ability to create ever more robust forms of entanglement in increasingly complex systems, for longer lifetimes, and at room temperature. Researchers will discover that under certain conditions, living cells also exhibit properties associated with quantum entanglement. Then the idea of bioentanglement will emerge, a concept that is more general than today's special cases of entanglement involving inanimate particles and photons. Someone will get a bright idea and ask, "I wonder what would happen if two brains were entangled? Would they show correlated behavior at a distance, just like other forms of bioentanglement?" Then someone will ask, "I wonder what it would feel like when my brain is entangled with the outside world? Are mind fields bioentangled with the rest of the universe?" http://www.noetic.org/publications/shift/issue5/s5_radin.pdf From checker at panix.com Mon Jul 4 01:27:11 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:27:11 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Radical Evolution' and 'More Than Human': The Incredibles Message-ID: 'Radical Evolution' and 'More Than Human': The Incredibles New York Times Book Review, 5.7.3 http://www.nytimes.com/2005/07/03/books/review/03PAULL.html [First chapter of More than Human appended.] RADICAL EVOLUTION The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human. By Joel Garreau. 384 pp. Doubleday. $26. 
MORE THAN HUMAN Embracing the Promise of Biological Enhancement. By Ramez Naam. 276 pp. Broadway Books. $24.95. By ANNIE MURPHY PAUL ''This book can't begin with the tale of the telekinetic monkey.'' So opens Joel Garreau's captivating, occasionally brilliant and often exasperating ''Radical Evolution.'' Garreau, a reporter and editor at The Washington Post and the author of the influential work of social demography ''Edge City,'' acknowledges his authorial choice is a sacrifice. After all, ''how often does someone writing nonfiction get to lead with a monkey who can move objects with her thoughts?'' But to begin his book about the technological enhancement of the human mind and body with this kind of gee-whiz gimmick would send a misleading signal. Garreau makes it clear he's more interested in people than in machines. Readers will be grateful, since an airless sterility often creeps into books like ''Radical Evolution,'' which is focused on the near future. In the next generation or two, Garreau writes, advances in genetics, robotics, information technology and nanotechnology (the science that permits the construction of infinitesimally tiny devices) may allow us to raise our intelligence, refine our bodies and even become immortal -- or they could lead to a ruinous disruption of our individual identities and shared institutions, and if things go really wrong, to the total destruction of humanity. Unless you've cultivated a taste for the hypothetical, the situations mapped out here, in which computers take over, can become so much numbing science fiction. Wisely, Garreau devotes himself to embedding these unfamiliar technologies in a human context. We meet researchers from the federal government's mysterious Defense Advanced Research Projects Agency, now engineering soldiers who don't need sleep and who can stop a wound from bleeding just by thinking about it. 
We spend time with scientists at a biotechnology firm called Functional Genetics, engaged in research on ''anti-infectives'' that could one day make humans invulnerable to AIDS, Alzheimer's and cancer. Garreau focuses on three camps of thinkers who have paused to contemplate the future. The first espouse what Garreau terms the ''Heaven Scenario.'' They believe enhancement technology will allow us to live forever in perfect happiness without pain, more or less. The most vigorous advocate of what one skeptic calls ''techno-exuberance'' is Ray Kurzweil, an inventor and futurist. ''I'm not planning to die,'' Kurzweil announces. Instead, he speculates that humans will one day upload the contents of their brains to a computer and shed their physical bodies altogether. Set opposite Kurzweil and his buoyancy is Bill Joy, a founder of Sun Microsystems, whose musings tend toward the apocalyptic. Well known for his dire warnings about the growing power of technology, the misnamed Joy represents what Garreau calls the ''Hell Scenario.'' Joy speculates that we may meet an undignified end in ''gray goo,'' a scenario in which self-replicating devices designed to improve our bodies and minds instead take on a life of their own, becoming ''too tough, too small and too rapidly spreading to stop.'' They may, Joy continues, eventually ''suck everything vital out of all living things, reducing their husks to ashy mud in a matter of days.'' Things really get interesting when Garreau meets up with Jaron Lanier, a computer scientist and originator of the concept of virtual reality. Lanier foresees neither nirvana nor apocalypse, but a future in which every technological crisis is met and matched by our own ingenuity and resilience. 
Garreau christens this the ''Prevail Scenario,'' and confesses his personal preference for this vision animated by what he calls his ''faith in human cussedness.'' Heaven and hell share the same story line, he writes: ''We are in for revolutionary change; there's not much we can do about it; hang on tight; the end. The Prevail Scenario, if nothing else, has better literary qualities.'' Garreau's style often takes the form of a notebook dump, in which he deposits his assorted jottings directly onto the page. Sometimes the results are stultifying, but when the subject has a mind as original as Lanier's, they're enthralling. Lanier's reflections are at once whimsical and serious: What if we could project our thoughts and feelings so that they were instantly visible to others? What if our superintelligent machines are felled by a Windows crash, just as they're about to take over? To read Garreau's dazzling, disorderly book is to be thrust into a bewildering new world, where ambiguity rules and familiar signposts are few. As he observes, ''by the time the future has all its wires carefully tucked away in a nice metal box where you can no longer see the gaffer tape, it is no longer the future.'' Whereas Garreau's portraits make it clear that ideas about the future are always idiosyncratic and subjective, rooted as much in emotional need as in rational analysis, there's no such nuance in Ramez Naam's ''More Than Human.'' Naam, a professional technologist who helped develop Microsoft Outlook and Microsoft Internet Explorer, is a relentlessly positive pitchman, unburdened by doubt or complexity. But his conception of our enhanced future looks less like Kurzweil's sunny utopia and more like a fluorescent-lighted superstore, in which we roam the aisles selecting from displays of brain implants and anti-aging pills. To Naam, the technological augmentation of our minds and bodies is not an ethical or philosophical question but just one more consumer choice. 
Accordingly, his main concern is with governmental interference in the free market for such devices. People should be allowed to make up their own minds about enhancements, Naam argues, since ''millions of individuals weighing costs and benefits have a greater collective intelligence, better collective judgment, than a small number of centralized regulators and controllers.'' Never mind that we don't allow citizens' ''collective judgment'' to decide which drugs are safe; that's why we have the F.D.A. Expert guidance, based on long-term, large-scale research, would seem even more essential in the case of activities like germline genetic engineering, which permanently changes the genetic code of an individual and all his or her descendants. Naam's other targets are those who seek to slow or even arrest research on biotechnology. Though these objectors span the ideological spectrum -- from Bill McKibben, the liberal author of ''Enough,'' to Leon R. Kass, the conservative chairman of the President's Council on Bioethics -- Naam lumps them all together as curmudgeonly sticks in the mud, ''advocates of the biological status quo.'' Yet just one page earlier Naam talks up the wonders of ''keeping people young longer'' through science. He seems not to notice that eternal youth -- along with faultless functioning, perpetual fertility and unfailingly pleasant mood -- is its own kind of frozen status quo. In fact, there's something peculiarly adolescent about the blend of narcissism, self-indulgence and lust for control that appears to motivate this quest to become ''more than human.'' Naam's book fails to grapple adequately with the consequences that may follow if, through technology, some of these limits are lifted. 
In hailing a drug that makes long-married couples feel like newlyweds again, or a neural prosthesis that allows you to ''turn down the volume'' on your brain's ''empathy centers,'' or gene therapy that bulks up your muscles ''while you're watching television,'' Naam and his fellow enhancement boosters seem unwilling to reckon with the fact that the same limits that make life difficult also give it meaning. Annie Murphy Paul is the author of ''The Cult of Personality: How Personality Tests Are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves.'' --------------- First chapter of 'More Than Human' http://www.nytimes.com/2005/07/03/books/chapters/0703-1st-naam.html By RAMEZ NAAM In 1989, Raj and Van DeSilva were desperate. Their daughter Ashanti, just four, was dying. She was born with a crippled immune system, a consequence of a problem in her genes. Every human being has around thirty thousand genes. In fact, we have two copies of each of those genes-one inherited from our mother, the other from our father. Our genes tell our cells what proteins to make, and when. Each protein is a tiny molecular machine. Every cell in your body is built out of millions of these little machines, working together in precise ways. Proteins break down food, ferry energy to the right places, and form scaffoldings that maintain cell health and structure. Some proteins synthesize messenger molecules to pass signals in the brain, and other proteins form receptors to receive those signals. Even the machines inside each of your cells that build new proteins-called ribosomes-are themselves made up of other proteins. Ashanti DeSilva inherited two broken copies of the gene that contains the instructions for manufacturing a protein called adenosine deaminase (ADA). If she had had just one broken copy, she would have been fine. The other copy of the gene would have made up the difference. 
With two broken copies, her body didn't have the right instructions to manufacture ADA at all. ADA plays a crucial role in our resistance to disease. Without it, special white blood cells called T cells die off. Without T cells, ADA-deficient children are wide open to the attacks of viruses and bacteria. These children have what's called severe combined immune deficiency (SCID) disorder, more commonly known as bubble boy disease. To a person with a weak immune system, the outside world is threatening. Everyone you touch, share a glass with, or share the same air with is a potential source of dangerous pathogens. Lacking the ability to defend herself, Ashanti was largely confined to her home. The standard treatment for ADA deficiency is frequent injections of PEG-ADA, a synthetic form of the ADA enzyme. PEG-ADA can mean the difference between life and death for an ADA-deficient child. Unfortunately, although it usually produces a rapid improvement when first used, children tend to respond less and less to the drug each time they receive a dose. Ashanti DeSilva started receiving PEG-ADA injections at the age of two, and initially she responded well. Her T-cell count rose sharply and she developed some resistance to disease. But by the age of four, she was slipping away, no longer responding strongly to her injections. If she was to live, she'd need something more than PEG-ADA. The only other option at the time, a bone-marrow transplant, was ruled out by the lack of matching donors. In early 1990, while Ashanti's parents were searching frantically for help, French Anderson, a geneticist at the National Institutes of Health, was seeking permission to perform the first gene-therapy trials on humans. Anderson, an intense fifth-degree blackbelt in tae kwon do and respected researcher in the field of genetics, wanted to show that he could treat genetic diseases caused by faulty copies of genes by inserting new, working copies of the same gene. 
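The two-copy arithmetic behind Ashanti's condition is the classic recessive pattern. A toy enumeration (standard Mendelian genetics, not data about the DeSilva family) for two carrier parents, each with one working copy "A" and one broken copy "a":

```python
# Recessive inheritance sketch: a child draws one gene copy from each
# parent at random. Only a child with two broken copies ("a", "a") is
# unable to make the protein at all; one working copy suffices.
from itertools import product

parent1 = ["A", "a"]  # carrier: one working, one broken copy
parent2 = ["A", "a"]  # carrier: one working, one broken copy

# The four equally likely combinations of one copy from each parent.
children = [tuple(sorted(pair)) for pair in product(parent1, parent2)]
affected = children.count(("a", "a"))   # two broken copies
carriers = children.count(("A", "a"))   # one broken copy, but fine
```

Of the four equally likely outcomes, one in four is affected, which is why parents with no symptoms themselves can have a child with the disease.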
Scientists had already shown that it was possible to insert new genes into plants and animals. Genetic engineering got its start in 1972, when geneticists Stanley Cohen and Herbert Boyer first met at a scientific conference in Hawaii on plasmids, small circular loops of extrachromosomal DNA in which bacteria carry their genes. Cohen, then a professor at Stanford, had been working on ways to insert new plasmids into bacteria. Researchers in Boyer's lab at the University of California, San Francisco, had recently discovered restriction enzymes, molecular tools that could be used to slice and dice DNA at specific points. Over hot pastrami and corned-beef sandwiches, the two Californian researchers concluded that their technologies complemented one another. Boyer's restriction enzymes could isolate specific genes, and Cohen's techniques could then deliver them to bacteria. Using both techniques, researchers could alter the genes of bacteria. In 1973, just four months after meeting each other, Cohen and Boyer inserted a new gene into the Escherichia coli bacterium (a regular resident of the human intestine). For the first time, humans were tinkering directly with the genes of another species. The field of genetic engineering was born. Boyer would go on to found Genentech, the world's first biotechnology company. Cohen would go on to win the National Medal of Science for his work on gene transfer. Building on Cohen and Boyer's work with bacteria, hundreds of scientists went on to find ways to insert new genes into plants and animals. The hard work of genetically engineering these higher organisms lies in getting the new gene into the cells. To do this, one needs a gene vector: a way to get the gene to the right place. Most researchers use gene vectors provided by nature: viruses. In some ways, viruses are an ideal tool for ferrying genes into a cell, because penetrating cell walls is already one of their main abilities. Viruses are cellular parasites.
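The idea that a restriction enzyme "slices and dices DNA at specific points" can be shown with a toy string search. The enzyme EcoRI and its recognition site GAATTC are my example (the excerpt names no specific enzyme), and the input sequence is invented:

```python
# Toy model of a restriction digest: an enzyme scans DNA for a fixed
# recognition sequence and cuts at a fixed offset within each site.
ECORI_SITE = "GAATTC"
ECORI_CUT_OFFSET = 1  # EcoRI cuts between the G and the AATTC

def digest(dna, site=ECORI_SITE, offset=ECORI_CUT_OFFSET):
    """Return the fragments produced by cutting dna at every recognition site."""
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + offset])
        start = pos + offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

print(digest("TTGAATTCCGGAATTCAA"))  # ['TTG', 'AATTCCGG', 'AATTCAA']
```

Cutting at specific, predictable points is what let Boyer's lab excise a chosen gene as a fragment that Cohen's plasmid techniques could then carry into bacteria.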
Unlike plant or animal cells, or even bacteria, viruses can't reproduce themselves. Instead, they penetrate cells and implant their viral genes; these genes then instruct the cell to make more of the virus, one protein at a time. Early genetic engineers realized that they could use viruses to deliver whatever genes they wanted. Instead of delivering the genes to create more virus, a virus could be modified to deliver a different gene chosen by a scientist. Modified viruses were pressed into service as genetic "trucks," carrying a payload of genes loaded onto them by researchers; these viruses don't spread from cell to cell, because they don't carry the genes necessary for the cell to make new copies of the virus. By the late 1980s, researchers had used this technique to alter the genes of dozens of species of plants and animals: tobacco plants that glow, tomatoes that could survive freezing, corn resistant to pesticides. French Anderson and his colleagues reasoned that one could do the same in a human being. Given a patient who lacked a gene crucial to health, one ought to be able to give that person copies of the missing gene. This is what Anderson proposed to do for Ashanti. Starting in June of 1988, Anderson's proposed clinical protocols, or treatment plans, went through intense scrutiny and generated more than a little hostility. His first protocol was reviewed by both the National Institutes of Health (NIH) and the Food and Drug Administration (FDA). Over a period of seven months, seven regulatory committees conducted fifteen meetings and twenty hours of public hearings to assess the proposal. In early 1990, Anderson and his collaborators received the final approval from the NIH's Recombinant DNA Advisory Committee and had cleared all legal hurdles. By spring, they had identified Ashanti as a potential patient. Would her parents consent to an experimental treatment?
Of course there were risks to the therapy, yet without it Ashanti would face a life of seclusion and probably death in the next few years. Given these odds, her parents opted to try the therapy. As Raj DeSilva told the Houston Chronicle, "What choice did we have?" Ashanti and her parents flew to the NIH Clinical Center at Bethesda, Maryland. There, over the course of twelve days, Anderson and his colleagues Michael Blaese and Kenneth Culver slowly extracted some of Ashanti's blood cells. Safely outside the body, the cells had new, working copies of the ADA gene inserted into them by a hollowed-out virus. Finally, starting on the afternoon of September 14, Culver injected the cells back into Ashanti's body. The gene therapy had roughly the same goal as a bone-marrow transplant: to give Ashanti a supply of her own cells that could produce ADA. Unlike a bone-marrow transplant, gene therapy carries no risk of rejection. The cells Culver injected back into Ashanti's bloodstream were her own, so her body recognized them as such. The impact of the gene therapy on Ashanti was striking. Within six months, her T-cell count rose to normal levels. Over the next two years, her health continued to improve, allowing her to enroll in school, venture out of the house, and lead a fairly normal childhood. Ashanti is not completely cured; she still takes a low dose of PEG-ADA. Normally the dose size would increase with the patient's age, but her doses have remained fixed at her four-year-old level. It's possible that she could be taken off the PEG-ADA therapy entirely, but her doctors don't think it's yet worth the risk. The fact that she's alive today, let alone healthy and active, is due to her gene therapy, and also helps prove a crucial point: genes can be inserted into humans to cure genetic diseases.

From Healing to Enhancing

After Ashanti's treatment, the field of gene therapy blossomed.
Since 1990, hundreds of labs have begun experimenting with gene therapy as a technique to cure disease, and more than five hundred human trials involving over four thousand patients have been launched. Researchers have shown that it may be possible to use gene therapy to cure diabetes, sickle-cell anemia, several kinds of cancer, and Huntington's disease, and even to open blocked arteries. While the goal of gene therapy researchers is to cure disease, gene therapy could also be used to boost human athletic performance. In many cases, the same research that is focused on saving lives has also shown that it can enhance the abilities of animals, with the suggestion that it could enhance men and women as well. Consider the use of gene therapy to combat anemia. Circulating through your veins are trillions of red blood cells. Pumped by your heart, they serve to deliver oxygen from the lungs to the rest of your tissues, and carry carbon dioxide from the tissues back out to the lungs and out of the body. Without enough red blood cells, you can't function. Your muscles can't get enough oxygen to produce force, and your brain can't get enough oxygen to think clearly. Anemia is the name of the condition of insufficient red blood cells. Hundreds of thousands of people worldwide live with anemia, and with the lethargy and weakness that are its symptoms. In the United States, at least eighty-five thousand patients are severely anemic as a result of kidney failure. Another fifty thousand AIDS patients are anemic due to side effects of the HIV drug AZT. In 1985, researchers at Amgen, a biotech company based in Thousand Oaks, California, looking for a way to treat anemia, isolated the gene responsible for producing the growth hormone erythropoietin (EPO). Your kidneys produce EPO in response to low levels of oxygen in the blood. EPO in turn causes your body to produce more red blood cells.
For a patient whose kidneys have failed, injections of Amgen's synthetic EPO can take up some of the slack. The drug is a lifesaver, so popular that the worldwide market for it is as high as $5 billion per year, and therein lies the problem: the cost of therapy is prohibitive. Three injections of EPO a week is a standard treatment, and patients who need this kind of therapy end up paying $7,000 to $9,000 a year. In poor countries struggling even to pay for HIV drugs like AZT, the added burden of paying for EPO to offset the side effects just isn't feasible. What if there was another way? What if the body could be instructed to produce more EPO on its own, to make up for that lost to kidney failure or AZT? That's the question University of Chicago professor Jeffrey Leiden asked himself in the mid-1990s. In 1997, Leiden and his colleagues performed the first animal study of EPO gene therapy, injecting lab monkeys and mice with a virus carrying an extra copy of the EPO gene. The virus penetrated a tiny proportion of the cells in the mice and monkeys and unloaded the gene copies in them. The cells began to produce extra EPO, causing the animals' bodies to create more red blood cells. In principle, this was no different from injecting extra copies of the ADA gene into Ashanti, except in this case the animals already had two working copies of the EPO gene. The one being inserted into some of their cells was a third copy; if the experiment worked, the animals' levels of EPO production would be boosted beyond the norm for their species. That's just what happened. After just a single injection, the animals began producing more EPO, and their red-blood-cell counts soared. The mice went from a hematocrit of 49 percent (meaning that 49 percent of their blood volume was red blood cells) to 81 percent. The monkeys went from 40 percent to 70 percent. 
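As a back-of-the-envelope check on the figures above: the dollar amounts, injection schedule, and hematocrit values are from the text, but the arithmetic below is my own illustration.

```python
# Implied per-injection cost of EPO therapy: $7,000-$9,000 per year
# at three injections per week.
INJECTIONS_PER_YEAR = 3 * 52

for annual_cost in (7000, 9000):
    print(f"${annual_cost}/yr -> ${annual_cost / INJECTIONS_PER_YEAR:.0f} per shot")

# Hematocrit: the fraction of blood volume made up of red blood cells.
# Fold-change after a single gene-therapy injection, per the excerpt.
for species, (before, after) in {"mice": (49, 81), "monkeys": (40, 70)}.items():
    print(f"{species}: {before}% -> {after}% ({after / before:.2f}x)")
```

The point of the second loop is how large the effect is: a single injection raised the red-cell fraction by roughly two thirds in both species.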
At least two other biotech companies, Chiron and Ariad Gene Therapies, have produced similar results in baboons and monkeys, respectively. The increase in red-blood-cell count is impressive, but the real advantage of gene therapy is in the long-lasting effects. Doctors can produce an increase in red-blood-cell production in patients with injections of EPO itself, but the EPO injections have to be repeated three times a week. EPO gene therapy, on the other hand, could be administered just every few months, or even just once for the patient's entire lifetime. The research bears this out. In Leiden's original experiment, the mice each received just one shot, but showed higher red-blood-cell counts for a year. In the monkeys, the effects lasted for twelve weeks. The monkeys in the Ariad trial, which went through gene therapy more than four years ago, still show higher red-blood-cell counts today. This is a key difference between drug therapy and gene therapy. Drugs sent into the body have an effect for a while, but eventually are broken down or passed out of the body. Gene therapy, on the other hand, gives the body the ability to manufacture the needed protein or enzyme or other chemical itself. The new genes can last for a few weeks or can become a permanent part of the patient's genome. The duration of the effect depends on the kind of gene vector used and where it delivers its payload of DNA. Almost all of the DNA you carry is located on twenty-three pairs of chromosomes that are inside the nuclei of your cells. The nucleus forms a protective barrier that shields your chromosomes from damage. It also contains sophisticated DNA repair mechanisms that patch up most of the damage that does occur. Insertional gene vectors penetrate all the way into the nucleus of the cell and splice the genes they carry into the chromosomes. From that point on, the new genes get all the benefits your other genes enjoy.
The new genes are shielded from most of the damage that can happen inside your cells. If the cell divides, the new genes get copied to the daughter cells, just like the rest of your DNA. Insertional vectors make more or less permanent changes to your genome. . . . From checker at panix.com Mon Jul 4 01:27:17 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:27:17 -0400 (EDT) Subject: [Paleopsych] Inner Worlds: Brain science and romantic love Message-ID: Brain science and romantic love [This is a dubious site. Links to other articles there below.]

THE SENSED PRESENCE AND ROMANTIC LOVE

Love seems to be an experience of the 'other.' Even though it's really about ourselves, we experience it as being to do with another person. To see more deeply into it, we need to look at the experience of 'the other' more deeply. There hasn't been a lot of research on the subject. There are many studies that have yielded interesting statistics about how being in love affects academic performance, how it affects the immune system, how it influences the perceived quality of life, and a range of other findings. But the experience itself remains elusive, especially in terms of neurology. There is one line of research that suggests something about the nature of love, and it seems that love is only one instance of a larger group of experiences: relating to the 'other'. After looking at the evidence, it seems to me that the 'other' is one's self. I'm thinking of some research into an experience called "The Sensed Presence." It's that feeling people get, usually at night, where they feel that there is someone or something in the room with them, an 'energy' or a 'presence' perhaps. They might feel simply that they are 'not alone' or that they're 'being watched.' There is an indefinable feeling that there is an 'other' of some kind in the room with them. To understand this, we need to look at the self, and not the other.
To start, we need to see that the self is more than our ordinary experience shows us. Even in our most quiet moments, when we are still, it can be very hard to see our 'self'. Buddhism teaches that there is no such thing. If they're right, then we're on a wild goose chase in looking for it. Other religions say that the self is God. If these teachings are right, then our self is so elevated that we may have no hope of ever understanding it. Fortunately, brain science is a bit more down-to-earth than religion. There, we have a chance of understanding what the self is, even though the information won't tell us the whole story. The latest 'teachings' from neuroscience tell us that we actually have two selves, one on each side of the brain. And they're specialized. Like anything else in the brain, they each have specific jobs to do. One of them, on the left, is the one that experiences things through language. It's very socialized. Language is mostly a tool for relating to other people, so the 'linguistic' self is very conscious of where it stands with others. It's very sensitive to its social rank, as it's reflected in the words of others. A simple string of words from another, like 'you're fired' or 'I love you', can have an amazing impact on the person hearing them. They'll feel that 'they' are affected by these words. And they, as social beings, have been. When we lose a job, our social rank is reduced. When we start a new relationship, or we can feel that we are secure in a present one, our social rank is raised a bit. The other side of the brain, on the right, also has a self. It experiences the world in non-verbal ways. It's more introspective. It's silent. It's affected by music, art, pictures, and our perceptions of how others feel, rather than what others say. It's more likely to manifest in situations where we aren't able to take the needs of others into account. It's usually subordinate, operating underneath the left hemispheric one.
It takes this role because the linguistic self is actively interpreting the world and our experiences with words all the time. For most people, this keeps the silent self hidden, so that it operates without our knowing about it consciously. In many ways, the 'conscious' self is the one on the left, with only intermittent input from the one on the right. The Sensed Presence experience occurs when our two senses of self fall out of phase with one another. The subordinate sense of self is experienced directly by the dominant, linguistic one. Because we can't have two senses of self, the intruding silent sense of self is experienced as an external presence, and 'felt' to be happening outside one's self. I believe that the sensed presence also happens when we relate to other people. Some presences mean threats, while others can mean support, comfort or safety. We use our experiences of past states as a repertoire from which we select the state best suited to arising situations, and presences known from the past are projected onto presences encountered in the present. I want to suggest that we are projecting a part of ourselves onto real people whenever we're relating to them. The presences we experience in other people are the creations of our own minds, externalized and projected onto others. Each separate presence will call up a separate state of consciousness, although the differences between many of these states might be slight. From infancy onwards, we have acquaintances; people who are too socially distant to be called friends, but close enough that we must pay some attention to them. The default settings for relating to acquaintances are derived from our own sense of self as it was with such people in the past. Other people are absolutely unique. Some people catch our attention very sharply. We fall in love with them, or we come to hate their guts. These people are not mere acquaintances. Their presence cuts closer to home.
Their words, for whatever reason, affect our self-esteem. Because we are such an intensely social species, our self-esteem is largely a function of what we think our value is in the eyes of others. Most of the time, people speak to each other in ways that reflect their respect, or lack of it. Respect has a lot to do with social standing and rank. For the most part, we respect ourselves when we feel respected by those around us, even though it doesn't necessarily have to be that way. Our self-esteem changes almost constantly. Most of these changes are experienced through our emotions, although it also has a serious impact on the way a person thinks. Our moods can be elevated and depressed through the words of others. Words like 'you're hired' and 'you're fired.' Or 'I love you' and 'leave me alone.' Our moods are directly connected to our level of self-esteem in each moment. In normal conditions, our experience of our selves is sensitive to how we are treated and spoken to by others. Each and every state of consciousness carries its own level of self-esteem. Whether or not one is in a subordinate position in any given situation initiates an appropriate state. The state enables a set of responses that minimize the situation's stress level by fulfilling the expectations of the dominant person in the situation. Each state has its own ways of thinking, feeling, speaking, and acting. Even for the most aware people, it's hard to see all these things happening at once. We live on autopilot, so to speak. If we were to try to make a conscious decision about each way we 'act out' our state of consciousness, we'd crash the system. We have to be on automatic, for the most part, because there are so many controls to adjust for each state. We want positive states to repeat, and to avoid the negative, unpleasant ones. This creates a tendency to bond with people that feel good to be around. Simple, eh? Not really. We 'decide' who feels good according to what we choose to project.
And we make these choices largely out of habit. It begins in infancy, when we first begin to experience ourselves as individuals. There has been some research in prenatal psychology that suggests that the fetus experiences its mother's states of consciousness as though they are its own. Sometime around birth, the newborn begins to experience its own states for the first time. Before birth, when the mother gets angry, the fetus experiences the same state, although certainly with very different phenomenological correlates. After birth, when the mother gets angry, the infant no longer experiences it with the same intimacy. The boundaries of the infant's new self must be found. In the womb, the fetus probably didn't distinguish between itself and its mother. She must now be experienced as an external presence. For the first time, the ambient chemical environment in the womb is experienced as its mother's smell. Its mother, now experienced as separate from itself, becomes the source from which all its physical and emotional needs are met, almost without exception. Many commentators on the experience of romantic love have argued that the experience of early childhood comfort and nurturing provides a template from which later expectations about relationships are drawn. We begin to feel that our lover ought to treat us much as our mothers did. Women, of course, have the additional process of mapping their senses of comforting, loving presences onto men. In looking for romantic fulfillment, we are looking to find an experience that will change our experience of ourselves. Not by looking for love within ourselves, as so many spiritual teachers suggest, but by allowing a part of our 'self' to manifest through another. When I stop and remember that we're a social species, I cannot help but see it differently. For some people, or at some times in a person's life, 'true happiness' might be found only outside one's self.
Our brains and minds are configured for relating to others in so many ways. Humans have a long childhood compared to other primate species, and most of it is spent relying on others to meet their most basic needs. Children are so engaged with the presence of others that they can usually play with anything and imagine it's alive. Children imagine their toys have a presence to them, so that a crayon becomes 'Mr. Crayon'. The Buddhist faithful imagine that a Buddha statue has the presence of the Enlightened One. The disciple sees God in his Guru. And these are projections. In the same way, lovers project their own loving presence onto their romantic partners. I want to suggest that falling in love is the process of projecting one's right-sided sense of self onto one's beloved. Because the same pathways that are involved in the maintenance of the right-sided, silent sense of self are also specialized for negative feelings, the maintenance of the romantic illusion is delicate at best. It's easily broken, and rarely lasts for more than a few weeks in most cases and a few years in cases where people feel strongly enough to marry. People often want to feel really passionate love before starting a relationship. But that kind usually doesn't last. When it fades, very few people escape disappointment of one kind or another. People are angered when their lover turns out to be who they are instead of who they were supposed to be. Sustaining relationships past this point calls for either denial or relationship and communication skills. I can use a fancy neuroscientific phrase to describe the nature of love (a sustained interhemispheric intrusion), but even I don't enjoy seeing my romantic side reduced to so sterile a set of words.
Like the sensed presence experience, being in love happens when the silent sense of self comes out where the linguistic sense of self can see it, except that instead of being sensed as a feeling that one is being watched, it's projected directly onto the beloved. In the process, the normal division of other and self is blurred. Lovers speak of losing themselves in the other, or say that they can't tell where they end and their lover begins. So long as one is able to sustain the illusion that one's partner will be the source of fulfillment, the projection continues undisturbed. It's been said that, when it comes to relationships, everybody is looking for a tailor-made fit, even though it's an off-the-rack world. Inevitably, something happens to disturb the illusion. The 'interhemispheric intrusion' ends. The honeymoon is over. 'Hemispheric intrusions' are often very brief events. A vision of an angel might last just a few seconds. The first flush of 'true love' might continue for only weeks or months. There was a study of the relationship between hemisphericity and self-esteem, and it found that the higher a person's level of right hemispheric 'dominance', the lower their self-esteem. Right hemisphericity means that a person's experience of their self is dominated by their right side. This is the side of the brain that is specialized for both negative feelings and non-verbal ways of processing our experiences. All other conditions being equal, the more intuitive and spontaneous a person is, the lower their self-esteem will be. Of course, people compensate in various ways, so that 'all other conditions' usually aren't equal. When a person is in love, their 'right hemispheric self' has access to the positive emotions on the left. Love feels good. However, after the experience is over, the person finds themselves more vulnerable to fear and sadness in response to things that threaten their sense of self. Such threats occur almost every moment in our lives.
Those whose sense of self is mostly derived from the left side are much less vulnerable. They are better able to feel good about themselves even in the face of verbal assaults, but they are also less likely to fall so deeply in love in the first place. The typical aftermath of a mystical experience finds the person feeling somewhat shaky. They will avoid those whose 'energy' tends to 'bring them down.' In other words, they won't be able to cope well in many social interactions. They may even retreat into solitude, and avoid relating to others as much as they can. They tend to reject the mind-set that supports the opinions of those whose company they don't enjoy. At the same time, there can be an almost obsessive desire to 'share' their experience with anyone willing to listen. They seek out validation in the eyes of those around them; 'shouting it from the rooftops', making up for the fragmentation their sense of self sustained in their epiphany. They may cling to those whose company they find supportive. Left hemispheric personalities are judged and labeled using such phrases as 'there are none so blind as those who will not see.' Ideas about karma are invoked to explain how some just aren't 'ready to hear the truth.' Someone who is in unrequited love, or is losing a lover they still want to be with, finds themselves in much the same position. They, too, are vulnerable. They also feel that others 'just don't understand.' Their self-esteem falls. They may cling to those who are willing to support them, just like those 'processing' in the wake of spiritual experiences and awakenings. They may also feel that they are not the same person they were before they experienced their romantic disappointment, just as the religious experient is also a 'changed' man or woman. In the Sufi tradition, God is referred to as the beloved, and it preserves many metaphors that convey the idea that separation from God is as painful as separation from whoever one is in love with.
Union with God is seen as similar to romantic fulfillment. I suggest that romantic love is underpinned by the same brain mechanisms that are involved in the experience of God. While a mystic experience is often short and intense, romantic episodes may last a long time. Both of them involve the silent, right-sided self coming out where the left-sided self can see it, along with intense positive feelings. The after-effects are based on similar neural and psychological mechanisms. The dark night of the soul and the despair of unrequited love are made of 'the same stuff'. There is some truth in the sayings that the beloved is God, and that when we love God we are loving ourselves. I and thou are one. The other is the self.
From checker at panix.com Mon Jul 4 01:27:25 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:27:25 -0400 (EDT) Subject: [Paleopsych] Inner Worlds: Neurobiology of Religious Terrorism Message-ID: Neurobiology of Religious Terrorism http://www.innerworlds.50megs.com/terrorism.htm [This is a dubious site. Links to other articles below.] Todd Murphy, Researching Behavioral Neuroscientist Understanding the mind of a suicidal terrorist is a special challenge in psychology.
Not only do their actions show a highly aggressive personality, but their motivations seem to outweigh even the imperative for self-preservation. The profile of the suicide bomber is not at all simple. We think of them as maniacs; madmen driven by national and religious hatred, or a simplistic 'will to power'. Yet, should soldiers do the same thing fighting in a cause we support, we're quick to quote our own culture's holy books: "There can be no greater love than to lay down one's life..." The suicide bomber himself is usually not forced into his actions. He is not held prisoner, nor forced at gunpoint to complete his mission. The kamikaze pilots of the Second World War were motivated differently. They were told that they were defending their homes and families from imminent destruction by US forces, and that they had a chance to stop the American fleet from arriving in Japan. If they succeeded, their families would escape danger. For the contemporary Islamic terrorist, no such threat exists. Their failure will not mean the end of their way of life, nor the deaths of their families. So, how to account for their unprecedented dedication? Or their 'greatest love'? The answer lies in the unique psychology of the religious killer. Fortunately, a recent study addresses the issue: "I Would Kill in God's Name:" Role of Sex, Weekly Church Attendance, Report of Religious Experience, and Limbic Lability. M.A. Persinger, Perceptual and Motor Skills, 1997, 85, 128-130. The study is part of a larger research effort into the neurological bases of religious experience, including religious personalities, religious conversions, and now, extreme religious views. It was done by administering a set of questionnaires to 1480 university students that asked about a wide range of religious beliefs, habits and behaviors. It also asked about how often the subjects had more common, 'altered state' experiences, like deja vu, the sense of a presence, electric-like sensations, and many others.
Taken together, these latter experiences (complex partial epileptic signs) give a measure of a person's "limbic lability". The statistical analysis involved taking each questionnaire that included a 'yes' response to an item asking whether the subject would be willing to kill for God. All the questionnaires that included a 'yes' to this item were examined to see what other items emerged in association with a willingness to kill in 'His' name. Four factors emerged:

1) Having had a religious experience.
2) Weekly church attendance (religious orthodoxy).
3) Being male.
4) Limbic lability (which will be explained).

The next step was to look at all the questionnaires that showed all four traits, creating a second group. 44% of this second group stated that they would kill another person if God told them to. The study was based on university students; if it generalizes, then one out of 20 Canadian university students would be willing to kill another person if they attributed the instruction to God. Let's examine these factors one by one.

RELIGIOUS EXPERIENCE

In dismissing religious experience, both modern scientists and politicians miss a factor that offers a more powerful motivation than patriotism, national defense, greed, or military pride: the absolute conviction that firsthand experience creates (1). Religious experience can take many forms. In its most intense manifestations, it can involve seeing God or hearing his voice, out-of-body experiences, a lucid or exceptionally powerful dream, sensing the presence of an angel, and even moments of creative inspiration. It can also appear as an emotional peak during a religious meeting, or a political meeting with religious overtones. In the aftermath of a religious experience, the individual will almost inevitably interpret it in terms of their religious or spiritual views (2). Whatever idea the person uses most readily will lend a context to their epiphany.
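The four-factor screening in the study quoted earlier amounts to a simple conjunctive filter over questionnaire records. A minimal sketch, using invented toy data rather than the original 1,480 questionnaires:

```python
# Toy sketch of the subgroup selection described above. The records
# here are invented for illustration, not drawn from the study.
records = [
    {"religious_experience": True, "weekly_attendance": True,
     "male": True, "limbic_lability": True, "would_kill_for_god": True},
    {"religious_experience": False, "weekly_attendance": True,
     "male": False, "limbic_lability": False, "would_kill_for_god": False},
    {"religious_experience": True, "weekly_attendance": True,
     "male": True, "limbic_lability": True, "would_kill_for_god": False},
]

FACTORS = ("religious_experience", "weekly_attendance", "male", "limbic_lability")

# Second group: respondents showing all four traits at once.
all_four = [r for r in records if all(r[f] for f in FACTORS)]

# Proportion of that subgroup willing to kill if God told them to
# (44% in the published sample).
willing = sum(r["would_kill_for_god"] for r in all_four)
proportion = willing / len(all_four) if all_four else 0.0
print(f"{willing}/{len(all_four)} = {proportion:.0%}")  # prints "1/2 = 50%"
```

With real data, the interesting number is the proportion within the all-four-traits subgroup, not the raw count; the published figure of 44% is that proportion.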
From then on, it acquires an ethos of absolute truth. It 'feels' like the truth. At present, many Islamic religious leaders are being quite vocal that violence and terrorism have nothing to do with Islam, and wonder openly how terrorists reconcile their actions with the teachings of Islam, with its constant exhortations to compassion and mercy. The answer probably lies in the sense of certainty that a religious experience can create. They do not need to consider the teachings of the Koran. Instead, they 'feel' the world in terms of their religious 'awakening', a direct message from God, one that supersedes scripture. All Islamic teachings are seen through its lens. When the idea that dominates their world is that of Jihad, then war and destruction are seen as a 'higher' form of compassion. Before we jump to the conclusion that we could never see any wisdom or compassion in being aggressive, we might recall how we support, in our minds, Jesus as he drove the money-changers from the temple of Jerusalem. That was an act of violence, guided by a higher wisdom. So, in Truman's mind, was the bombing of Hiroshima. The step from higher wisdom to violence is all too common. Anyone who has ever gone too far in punishing a child has made it. For the Jihadi, like anyone else, the full range of religious experiences is quite large. A person might hear a religious statement and feel a paresthesia, a tingling or electric-like feeling, or more commonly, an intense burst of emotion. The statement, whatever it is, suddenly acquires tremendous force. The person having the experience is easily convinced that the statement is true. The powerful emotions experienced during a political rally, given a religious interpretation, make its goal or theme into a religious truth. At least insofar as the person will then accept the message as part of their own faith.
If it glorifies the individual; if it 'saves' them, or makes them feel that they are one of the "chosen few" or the "anointed", or that their service to God is special, the person can begin to identify themselves more as the one 'touched by God' than simply as a person. Once a person identifies with God, their own life can actually be seen as being in the way, a hindrance to unity with "Him". Suicide is sanctioned when it is believed to be a part of God's plan, and the person who commits it in His name may seem to be spiritually elevated, a high and holy being. "A leading Islamist authority, Sheikh Yusuf al-Qaradawi, recently explained the distinction this way: attacks on enemies are not suicide operations but 'heroic martyrdom operations'." (3)

In all probability, Jihad suicides are not motivated by a desire to elicit fear. Their military superiors certainly are, but the individual is much more likely to be motivated by the desire for an intangible, personal reward: the desire to experience, once again, the peak moments of their lives. People who have had religious experiences will often do whatever it takes to recapture those moments. Indeed, suicide in this context is not without precedent. One type of religious experience (far more intense than the type that can occur in moments of extreme religious fervor) is the now well-known near-death experience (NDE). There have been reports of suicides by people who have had deeply spiritual experiences while clinically dead, but were later revived. While the overwhelming majority of NDE experiencers say that they would never consider suicide, they also often have a deep longing to return to the blissful state they had during their NDEs. History records many examples of suicide for religious reasons.
The Jim Jones cult; the mass suicide of Jewish Zealots under Roman siege at Masada; the Vietnamese patriarch who immolated himself in protest during the Vietnam War; Joan of Arc's refusal to recant her preaching, based on her visions, even though it meant being burned at the stake. The point is that religious experience can introduce priorities for a person that are quite 'larger than life'. So much so that death can lose its repellent quality. It can even become quite exalted. Both the killer and the killed can seem trivial compared to God's larger plan. Revulsion at suicide in "His" service can seem almost self-indulgent. Islamic tradition records that having visions can help a Jihadi to fulfill his mission. The very word assassin derives from the Arabic word Hashishim, a medieval term that referred to a group of assassins who were offered hashish-induced visions as a preview of the heavenly rewards that awaited them after giving their lives in Jihad. In fact, the relationship between the leaders and the individual terrorist attackers is probably better compared to a guru and his disciples than to a general and his troops. The FBI has considerable valid material on the psychology of the 'surrendered disciple', gathered during its investigation of Bhagwan Shree Rajneesh prior to his departure from the US in 1985. Many trained psychotherapists were involved in his organization, and later repudiated him as their guru. Important former disciples from this group may have useful insights to offer. Today's terrorists have probably replaced hashish with emotionally intense moments during rallies and prayer meetings. Both can have profound effects on the limbic system, and both can cement beliefs solidly into place. The suicidal Jihadi perceives himself as making the supreme sacrifice for God.
They believe they will enjoy special rewards in the hereafter, and, because of the architecture all human brains share, they have a sense that the fulfillment of the longing their peak experience created will happen at death. Having the sense of being on a mission from God in dying, they simply cannot imagine that they will not attain a "peace which passeth understanding". Further, terrorist orthodoxy would interpret the suicidal Jihadi as a traitor against God should he change his mind once he had volunteered. Social pressures to follow through are enormous. The Jihadi who, in a moment of fervor or deep reflection, decides to give his life for the holy war may have little choice but to follow through. Like the Japanese kamikaze who chose to pilot his plane to its destination, the Jihadi remains on the course he is given. Such a Jihadi will either not have been exposed to alternative interpretations of his behavior, or will reject them out of hand due to their unorthodox origin. When confronted with vital decisions, the orthodox ally with orthodoxy. Because people who have had religious experiences use them as the benchmark for their spirituality, rather than the scriptures or teachings, they will often be very reluctant to share their experiences and missions with others. Most people who have had deep religious experiences feel that others just 'do not understand'. With phrases like 'there are none so deaf as those who will not hear' and "you shall not cast your pearls before swine", people tend to keep very quiet about the sense of destiny that their religious experience gives them. In more practical terms, they very likely prefer to avoid the challenges to their self-identity that challenges to their ideology would create. A bit like a child who avoids showing off their writing in class for fear of ridicule. The need for secrecy among religious terrorists can acquire a sacred connotation in this way.
In relating to their fellow Moslems, they may see themselves as consecrated holy men, an 'inner circle', moving among a spiritless herd of sheep, if they relate to them at all.

WEEKLY CHURCH ATTENDANCE

The item in the study that asked about weekly church attendance is worth a closer look. The study was done in Canada, in a predominantly Christian community (Sudbury, Ontario). Going to church every week would seem to evidence traditional, orthodox, or institutionalized beliefs, and is interpreted in this paper as evidence of religious orthodoxy. Like the churchgoer, the Jihadi embraces a set of beliefs shared with a community, and participates in the life of that community regularly. This now refers to their own 'inner' circle, and not to that of other Moslems. Orthodoxy in belief is reinforced almost constantly by subtle social rewards from the community. Leaders within the community are also the ones who demonstrate the deepest understanding of its beliefs. Its heroes are the ones who do the most for the cause. Missionary work, charity, and religious practice are good sacrifices, but nothing, absolutely nothing, compares with martyrdom. This is where some beliefs peculiar to Islam come into play. Not only does it sanction war in the name of Allah, but it promises acceptance into heaven to those who die in these wars, called Jihads, and considers them to be martyrs. This allows suicides to see themselves as martyrs as well. In fact, they probably consider themselves the highest form of martyr. Although the term "Mujahideen" is usually used for fighters in Jihads, we will use the term "Jihadi" here, so as to avoid accidental pejorative reference to legitimate Islamic freedom fighters, who appear regularly in recent history.

GENDER

One of the features of the religious killer is that they are usually, but not always, male.
One exception has been the female suicide bomber who attempted to assassinate Sri Lanka's president, Chandrika Kumaratunga, in 1999. The male brain differs from the female brain in that it seems to be less "multitasking" (4). When a male is engaged in a task, fewer brain structures are activated, in most cases, than in a female brain engaged in the same task. Males, therefore, are better able to maintain the orthodoxy required, and to exclude items from their thoughts (denial), like, for example, the long-term consequences of their actions. The dominant point of reference will always be the religious experience, and the framework of beliefs used to interpret it. The male brain seems better adapted to handle the 'single-pointedness' religious mania requires.

TEMPORAL LOBE LABILITY

Temporal lobe lability refers to a person's sensitivity to altered states of consciousness. I don't mean the dramatic ones, such as religious visions. I mean the more subtle ones, with phenomena like deja vu, 'sensing a presence', pins-and-needles sensations, fleeting visions during twilight sleep, and other common episodes (5). These occur in a continuum across the human population (5, 27). Some people are very sensitive, having these experiences very often, and others never have them at all. Not all people who have these experiences also have religious experiences, but almost everyone who has had a religious experience has had them (6, 26). In the body of research to date (7, 28), these experiences appear when two structures within the temporal lobes have their normal communications with one another disturbed. This can happen between the two hemispheres, between two structures within the limbic system (deep in the temporal lobes), or between a deep structure and the surface of the temporal lobes.
In this model (vectorial hemisphericity and interhemispheric intrusions) (8, 9), the event of the religious experience is likened to an extremely small epileptic event that stays in the temporal lobes of the brain. These are called "microseizures" (10). Like larger epileptic seizures, these experiences make lasting changes in the brain (11). The personality of the person, their 'sense of self', is forever changed, as their limbic system now has a few pathways (matrices of neurons) 'burned in', in a process known as 'kindling'. Pathways that relate to the human sense of self. The limbic system plays a crucial role in the production of thought and emotion. It also deals in two rather subtle phenomena: meaningfulness and contextualization (12). These direct our thoughts and feelings into recurring patterns unique to each individual, and with them, unique behavior patterns. The connotations of words are processed through their meanings and the context in which we hear them. The sense that an experience "means something", or that words 'mean something more' than they say, are two examples of somewhat raw experiences of meaningfulness. If a thought feels meaningful, then it will try to find a context for itself. The more meaningful it is, the larger the context must be. God's will becomes larger, or more important, than life itself. A more salient point at this juncture is that the religious experience begs the largest possible context, even if only in one's thoughts. The largest context, for Peoples of the Book (to coin a phrase), has always been God, and "His" whole world. In order to encompass their experience, the person must make use of some rather exotic ideas. Ideas which are as far from their ordinary experience as their peak moments. For the Islamic terrorist, these are well known: Jihad, doing God's will, martyrdom, etc. Death. In 'His' name.
GEOLOGY

In recent years, it has emerged that the human psyche can be affected by seismic activity, or rather, seismic (or tectonic) strain (13, 14). The more earthquakes and tremors in a given area, the more often the earth's magnetic field changes, and that can have an impact on how our brains function. Our brains are sensitive to changes in the earth's magnetic field because they contain large numbers of organically-grown magnetite crystals. By coincidence or design, patterns seem to appear in the geomagnetic field, and these seem to have some overlap with the magnetic signals that are created when our brains are engaged in normal electrical firing; specifically, in the limbic system. Exposure to the earth's magnetic field is 'chronic', meaning constant and long-term. Chronic exposure to changing magnetic fields could make the populations of seismically-active areas demonstrate a higher-than-average limbic lability. In other words, areas with a lot of earthquakes and tremors are more likely to produce a population willing to kill in God's name, all other conditions being equal. According to the Israeli Seismic Network's "Galilee" data set, Israel had 28 earthquakes in one three-year period (1987 to 1991): about one every five weeks (15). One possibility that cannot be discounted at present is that the Israeli/Palestinian territories might produce populations with either higher-than-normal limbic lability, or limbic phenomena showing specific patterns whose behavioral correlates include aggressive thoughts (ideation) and behavior. Although so far there have been no statistical studies of temporal-lobe-based behaviors in seismically active areas, the notion has been considered, with California's southern Lake County designated as a good area to carry out such a study.
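The quoted earthquake rate is easy to check; a quick arithmetic sketch using the figures cited above:

```python
# Check the quoted rate: 28 earthquakes over the three-year
# "Galilee" window cited above.
quakes = 28
weeks = 3 * 52  # three years, in weeks
weeks_per_quake = weeks / quakes
print(f"about one every {weeks_per_quake:.1f} weeks")  # prints "about one every 5.6 weeks"
```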
Eventual studies in this field may allow a meaningful measure of the aggressiveness of populations in seismically-active areas, and with that, an estimate of the size of the population within the Palestinian minority willing to go to the furthest extremes, "in God's name". There are three cultures best known for the glorification of suicide: the Arabic-speaking terrorist sub-culture; the Aztecs, who exalted the act of volunteering as a human sacrifice; and medieval Japan, where suicide reached the status of a cult behavior within the Shinto religion, complete with its own ceremony. All of these areas are subject to frequent seismic activity.

FURTHER CONSIDERATIONS

1) Given a sufficiently large population, large numbers of individuals who fit the criteria (about 1 in 20, based on the Canadian data) should be readily available. When the data is corrected for local seismicity, we should find that the number increases noticeably. Any estimates based on this data should be reduced substantially in recognition that suicide is a less probable behavior, all other conditions being equal, than homicide. However, the relative incidence of each behavior in normal populations may not necessarily provide meaningful estimates. Suicide is prohibited in normal religious belief, but sanctioned, even suggested, for the Jihadi.

2) Jihadis who have had a religious experience and fit the other criteria should be much more willing to volunteer than others, due to their personal sense of destiny, which martyrdom will appear to fulfill.

3) The more successful attacks this group performs, the more willing their volunteers will be, as they see their predecessors attaining the highest spiritual promotion, while they themselves only stand and wait.

In short, the territorial and ideological conditions in the Middle East may favor the production of populations willing to kill for God.
These considerations may help us to see that terrorists are not entirely the products of hate-mongers, and that they may not be beyond help and reform. End
_________________________________________________________________

REFERENCES

(1) Persinger, Michael A., "Neuropsychological Bases of God Beliefs", Praeger, 1987
(2) Persinger, M.A., "Vectorial Cerebral Hemisphericity as Differential Sources for the Sensed Presence, Mystical Experiences and Religious Conversions", Perceptual and Motor Skills, 1993, 76, 915-930
(3) The Jerusalem Post, July 27, 2001, http://www.danielpipes.org/articles/20010727.shtml
(4) Moir, Anne & Jessel, David, "Brain Sex: The Real Difference Between Men and Women", Laurel Publications, 1991
(5) Persinger, Michael A. & Makarec, Katherine, "Complex Partial Signs as a Continuum from Normals to Epileptics: Normative Data and Clinical Populations", Journal of Clinical Psychology, Jan 1993, 49 (1), 33-37
(6) Persinger, M.A., "People Who Report Religious Experiences May Also Display Enhanced Temporal-Lobe Signs", Perceptual and Motor Skills, 1984, 58, 963-975
(7) Persinger, M.A., "Religious and Mystical Experiences as Artifacts of Temporal Lobe Function: A General Hypothesis", Perceptual and Motor Skills, 1983, 57, 1255-1262
(8) Persinger, M.A., "Enhanced Incidence of the 'Sensed Presence' in People Who Have Learned to Meditate: Support for the Right Hemispheric Intrusion Hypothesis", Perceptual and Motor Skills, 1992, 75, 1308-1310
(9) Persinger, Michael A., Bureau, Yves R.J., Peredery, Oksana P. & Richards, Pauline M., "The Sensed Presence as Right Hemispheric Intrusions into the Left Hemispheric Awareness of Self: An Illustrative Case Study", Perceptual and Motor Skills, 1994, 78, 999-1009
(10) Persinger, Michael A., "Striking EEG Profiles from Single Episodes of Glossolalia and Transcendental Meditation", Perceptual and Motor Skills, 1984, 58, 127-133
(11) Persinger, Michael A., "Near-Death Experiences: Determining the Neuroanatomical Pathways by Experiential Patterns, and Simulation in Experimental Settings", in "Healing: Beyond Suffering or Death", Ministry of Mental Health Publications, Quebec, Canada, 1994
(12) Miller, Robert, "Cortico-Hippocampal Interplay and the Representation of Contexts in the Human Brain", Springer-Verlag, 1991
(13) Persinger, M.A., "Out-of-Body-Like Experiences Are More Probable in People with Elevated Complex Partial Epileptic-Like Signs During Periods of Enhanced Geomagnetic Activity: A Nonlinear Effect", Perceptual and Motor Skills, 1995, 80, 563-569
(14) Conesa, Jorge, "Isolated Sleep Paralysis, Vivid Dreams, and Geomagnetic Influence: II", Perceptual and Motor Skills, 1997, 85, 579-584
(15) [4]http://es1.multimax.com/~gtdb/galilee/eq.html

[7]Shakti - Magnetic Brain Stimulation [8]Deja Vu [9]Darwinian Reincarnation [10]Consciousness [11]Romantic Love and the Brain [12]Origins of spirituality in Human Evolution [13]Sacred Lands [14]"The Sensed Presence" [15]Glasses For Enhanced Visual Acuity [16]God in the Brain [17]Spiritual Aptitude Test [18]Stimulating My Brain As A Spiritual Path [19]Inventing Shakti [20]Sex and States of Consciousness [21]The Gay Male Brain - Evolutionary Speculations [22]Visions [23]The Spiritual Personality [24]Enlightenment And the Brain [25]Archetypes [26]A Diet For Epileptics? [27]Odd Experiences - Online Poll Results [28]Brain News [29]Out-Of-Body Experiences [30]Near-Death Experiences - Thai Case histories [31]The Big Bang [32]Meditations from Brain Science [33]Near-Death Experiences in Thailand - Discussion [34]Downloads [35]The Terrorist Brain [36]Publications by Dr. M.A. Persinger [37]Credentials [38]Hippocrates on Epilepsy References 2. mailto:brainsci at jps.net 3. http://tinyurl.com/4b3y9 4. http://es1.multimax.com/~gtdb/galilee/eq.html 7. http://www.innerworlds.50megs.com/winshakti/index.htm 8.
http://www.innerworlds.50megs.com/dejavu.htm 9. http://www.innerworlds.50megs.com/rebirth.htm 10. http://www.innerworlds.50megs.com/consciousness.htm 11. http://www.innerworlds.50megs.com/romance.htm 12. http://www.innerworlds.50megs.com/deathanxiety.htm 13. http://www.innerworlds.50megs.com/earthfee.htm 14. http://www.innerworlds.50megs.com/sp.htm 15. http://www.innerworlds.50megs.com/evaglasses.htm 16. http://www.innerworlds.50megs.com/god.htm 17. http://www.innerworlds.50megs.com/anchored_TL_test.htm 18. http://www.innerworlds.50megs.com/me_myTL.htm 19. http://www.innerworlds.50megs.com/neuromag.htm 20. http://www.innerworlds.50megs.com/sex_ascs.htm 21. http://www.innerworlds.50megs.com/gaybrain.htm 22. http://www.innerworlds.50megs.com/Visions.htm 23. http://www.innerworlds.50megs.com/traits.htm 24. http://www.innerworlds.50megs.com/moksha.htm 25. http://www.innerworlds.50megs.com/archetypes.htm 26. http://www.innerworlds.50megs.com/ketogenic.htm 27. http://www.innerworlds.50megs.com/pollresults.htm 28. http://www.innerworlds.50megs.com/brain_news.htm 29. http://www.innerworlds.50megs.com/obe.htm 30. http://www.innerworlds.50megs.com/bkknde.htm 31. http://www.innerworlds.50megs.com/bigbang.htm 32. http://www.innerworlds.50megs.com/neuromed.htm 33. http://www.innerworlds.50megs.com/thaindes.htm 34. http://www.innerworlds.50megs.com/downloads.htm 35. http://www.innerworlds.50megs.com/terrorism.htm 36. http://www.innerworlds.50megs.com/Persinger_pubs.htm 37. http://www.innerworlds.50megs.com/credentials.htm 38. http://www.innerworlds.50megs.com/hippocrat.htm From checker at panix.com Mon Jul 4 01:27:33 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:27:33 -0400 (EDT) Subject: [Paleopsych] Inner Worlds: Origins of spirituality in human evolution: what happened when our species learned of our own mortality. Message-ID: Origins of spirituality in human evolution: what happened when our species learned of our own mortality. 
http://www.innerworlds.50megs.com/deathanxiety.htm [Again, this is a dubious site.] The Beginnings of Spirituality and Death Anxiety in Human Evolution. [2]brainsci at jps.net

Mommy, mommy, I feel sick. Run for the doctor, quick, quick, quick. Doctor, doctor, will I die? Yes my dear, and so shall I. (Whitley Strieber)

Human spirituality had an origin in our history. It began soon after we acquired our language skills, and is related to the linguistic aspects of our sense of self. If we didn't have language, it would have been very easy to go into a total denial of the fact of personal death (1). Nobody has ever experienced their own death. You have to figure it out while you're still alive. How do you know you'll die? Unless you have some fairly intense psychic powers (and you believe in reincarnation), you won't remember dying, and even then your memories will bear other interpretations. Most people, most of the time, only know that they will die because they've learned it, usually during childhood. "Does everybody have to die, Daddy?" "Do dogs go to Heaven?" "Can people in Heaven see us?" "Is it a long time?" As children grow up, they experience the deaths of those around them, and learn that people actually die. Their religions tell them about life after death, making sure that kids think about it in their own terms as soon as they learn to think about it at all. Tales are told. Death is heaven and hell. Death is rebirth. Death is where the ancestors are. Death is a lush spirit world. Death is being in the arms of God. Cultures and religions have co-opted death, turning it into a story written by living cultures, for living people. Near-death studies have found that experiences very much like traditional afterlife stories can actually be found in near-death accounts. This can explain the source of these stories, but this chapter is more about why humans need these stories in the first place.
When we first appeared as a species, our brains expanded in two important areas: the frontal lobes, which have to do with planning, anticipating things, and projecting into the future, and the temporal lobes, which have to do with memory. Both of these large areas have many other functions, but these two stand out when we are talking about understanding death. The temporal lobes expanded to include language comprehension areas, and the frontal lobes grew to include language production areas. The human sense of self changed to include a component that dealt in language, so that we began to take words personally, and to feel ourselves affected by what others say to us. Our minds were re-shaped with a new 'top priority': talking to others. Each person had to fit the way they related to others into a vocabulary they shared with others. The process of actually identifying with others was probably enhanced as well. We were more able to assume that our experiences were like those of others, and that their experiences were also like ours, because they used the same words and gestures we did. This must have enhanced our capacity for bonding, but it also introduced a defense mechanism that helps people to feel that anyone who seems to experience the world differently from themselves is somehow less than fully human. Other nations were thought of as though they were other species. We began to judge others. Not just dislike them, but actually entertain thoughts that they shouldn't be the way they are. At this point in our evolutionary history, a fundamentally new experience became possible. A person could look at a dead body, remember the experience, think about it, personalize the whole thing, and conclude that the same thing is going to happen to them. Language skills are utilized, and the sentence appears in the mind: "I will die." The conclusion is reached without the person having any firsthand experience at all. The concept is very threatening.
Our new cognitive skills would allow a lot more imagination than before, and it would have been very adaptive for us to use this skill to imagine as many ways of dying as possible. The more ways of dying we can imagine, the more ways we can avoid. But death anxiety is very stressful. If we were aware of our death at all times, we would be at risk for several psychoses, like the ones that follow the development of the normally fear-laden temporal lobe seizures (2, 3). Persinger (4) has theorized that we developed a mechanism that shuts death anxiety off: spiritual experience. You have to know something about how the brain creates emotion before you can understand how this works. It starts with a structure called the amygdala. Actually, there are two of them, one on each side of the brain. The one on the right is specialized for negative feelings, especially fear and sadness. The one on the left manages positive feelings. There's an idea that keeps reappearing in my work: that when a negative emotion becomes intense enough, it can actually create bliss. Here's how it works. As a negative emotion, especially fear, deepens, it involves more and more of the right amygdala. The source of the emotion stimulates it from within. When a certain point is reached, it 'overloads', and the activity spills into the amygdala on the left. All of a sudden the left amygdala, which has been operating at a low level, is filled with activity, and the person is filled with bliss, joy, ecstasy, and a sense of meaningfulness. The point where this happens is very deep in the experience of fear or sadness. My interpretation of these events is that they're a rare example of a state of consciousness that's usually a part of the death process (5). Because these states are ordinarily reserved for the end of life, they might manifest when a person merely feels that their life, their 'self', is threatened with extinction.
When that threshold is crossed, a spiritual experience can occur, one that takes a part of the death process and uses it to end a painful episode. Many near-death phenomena have appeared at times when a person only thought they were about to die, even when they weren't in any danger at all, as though the belief that one is about to die is as much of a trigger as death itself. There are many recorded accounts of near-death-like experiences happening because of threats to the sense of self, without any threats to the person's life. Here's one such case (4): "When Fred died, the world collapsed around me. I could not eat or sleep, everything seemed to lose its color - food was tasteless, I couldn't swallow because of this lump in my throat; it would not go away no matter how much I cried. My mental pain would come and go like chill waves. Sometimes I would forget for a few minutes and think it was all a bad dream. Other times, the reality of it would hit me like a cold shower. The fourth night after he died, I lay in bed, trying to piece my life together. I lay there for hours. Suddenly, I felt Fred's presence beside me in the bed. I looked over and saw him standing beside me. He was dressed in his old work clothes and had a big smile on his face. He said "Don't worry Maud. I'm in heaven now, God has let me come to you. All our friends are here too. It's all true, what we believed about God ... this is only a temporary separation." I went to sleep and didn't wake for hours. The next day I felt good, the sun was shining again; there was meaning to my life." Maud probably identified herself as Fred's wife. When he died, she died. Her sense of herself, that is. Her brain's activity can be guessed at: when her grief passed a certain point, her left amygdala was triggered, and its positive contribution to her sense of self was restored. When objects identified with the self are lost, so is the self.
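The 'overload' mechanism described above can be caricatured as a simple threshold model. This is purely illustrative: the function, the threshold value, and the units are invented for the sketch and are not drawn from the literature.

```python
def affect(fear, threshold=0.9):
    """Toy caricature of the spillover idea: right-amygdala (negative)
    activity tracks fear intensity; past an invented threshold, activity
    'spills over' into the left (positive) side."""
    negative = min(fear, 1.0)
    positive = max(0.0, fear - threshold)  # spillover beyond the threshold
    return {"negative": negative, "positive": positive}

print(affect(0.5))   # below threshold: negative affect only
print(affect(0.95))  # past threshold: positive affect appears alongside fear
```

The qualitative point the sketch makes is the same one the text makes: positive affect appears only once negative affect is already near its ceiling.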
In fact, one study found that the most prominent predisposing factor in sensing the presence of a deceased spouse was that the husband or wife had died unexpectedly (8). Without time to prepare themselves mentally, the bereaved weren't able to resist their own grief, and the threshold was passed.

The human sense of self is partly a social thing. If a person experiences too much rejection at the hands of others, as in child abuse, their self-esteem can be lowered below a certain point, also triggering this process. There are several studies on child abuse that support this idea (6). As the cycle of abuse proceeds, dissociative states that first appear as ways to escape from the abuse can become permanent options, 'traits' (7). The following case (author's collection, paraphrased) illustrates the point:

"As a child, I was severely abused in every way a child can be. I grew up never having even one toy. I would be locked in a closet for days at a time. I spent my whole childhood wanting to die. He (her father) wouldn't give me any food or water. I lost all sense of time in there. I felt myself falling into a space I came to think of as 'the pit of despair'. Eventually, I came to the bottom. There, I found angels waiting for me. They held me and comforted me and told me how I was being prepared for something important that would come later on in my life. They promised me that they would never leave me and that they would always protect me. Now, when I do massage, these same angels appear and give me spirit guidance. They helped me to become a healer, and I can't imagine anything I'd rather be. I can't say that I'm glad I was abused, but having been abused is a part of my life, and I like my life now."

There are several points that both these stories have in common with near-death experiences, such as the angels and meeting a dead person. It seems as though both women felt they were dying, and used the mechanism for healing these feelings that appears in the death process.
A lot of my work is devoted to exploring the idea that mystic experiences are instances of the death process occurring outside their normal context. I don't see these as pathological instances of the experiences. It seems more likely that our continually evolving minds found additional applications for the new neural mechanisms associated with the death process, and that this is the source of human spirituality.

In another article, we have looked at the similarities between romantic love and religious devotion. Love, however it's defined, has a powerful ability to lessen (attenuate) death anxiety. The death process, as revealed in near-death experiences, seems to return, over and over, to the experience of love and being loved, of being reunited with loved ones, and of looking at life in terms of how much love we create for ourselves while we live. Both love and the experience of religious bliss lessen the anxiety that threats to the sense of self create. Love from others heals wounds to the self, which is partly a social thing. Religious bliss and ecstasy heal threats to the more privately felt self.

The death process begins with the fear and resistance that help us try to survive, but once death begins and survival becomes impossible, the fear that expecting death creates is replaced by a feeling that's every bit as good as the fear of death was bad. Ours is the only species that can hold the thought "I will die" in its mind, and the only one that needs a way to cope with it; its long-term effects may be among the most important factors that shaped our cultures. As children, we might run to our mommies when we hear things that hurt our feelings. As adults, we run to God when our feelings are hurt. The fact of death is understood as an idea first, so its natural salve is more ideas: ideas like the ones that religion uses to assure people that death, somehow, does not really exist.
Any idea will do, so long as it makes it possible to face death without anxiety. A story comes to mind. When I was in India, I was walking along a main street in Jaipur. I came on a small crowd gathered on one side of the street, and went up to see what it was about. When I got in, I saw that they were staring at a very pale man lying on the ground, wearing only a loincloth. He was covered with a loosely woven cheesecloth. Next to him was a battered aluminum bowl with some money in it. I noticed a cheerful-looking man standing next to him, wearing the khaki shirt and brass insignia of a government worker. I went up to him and asked: "What are you doing?" You can ask that sort of thing in India. He said: "I am collecting money for this poor fellow. I work in municipalities office." I asked: "What's the matter with him?" He answered: "Nothing is matter with him. He is just dead. We need money to burn his body. You put money. Very good for you." This man wore the Hindu tilak that advertised his beliefs about death and dying. I looked at the man, now knowing that it was a corpse. As if he could read my mind, the agent said: "Very soon he is child again. Nothing Worry." I put down 20 rupees. The man's comments illustrate the ease with which the mind can remove the threat of death and turn it into something trivial. "He is just dead. ... Nothing Worry."

Just a few hours before writing this, I was in the grocery store. While I waited in line, an old man got in the line behind me, and quite out of the blue he said, "I'm 83 years old. Last week, I knew I was going to die soon, but I don't care. I tried to tell my son, but he didn't wanna listen." I told him, thinking of near-death experiences, that I'd heard the afterlife was usually a pretty good deal. He said, "Oh ... I don't believe in any of that horseshit." The man seemed very happy. I asked him about it, and he said, "Oh ... I smile all the time now."
Feeling the approach of death to be certain, whether real or imagined, natural or not, expected or not, can initiate experiences that commonly occur at death. Threats to the more subtle sense of self are handled differently, in more social ways, like receiving comfort from others, but the brain structures involved seem to be much the same. We use language to amplify the fear of death, and that creates a deeper need to avoid it than any other species has. Our thoughts of death gave us a reason to want to be immortal. Our death process, which continues our consciousness for a time after death, makes it possible for us to feel that this is true while we're still alive, and to feel that we are safe from dying.

References

(1) Persinger, M.A., "Death Anxiety as a Semantic Conditioned Suppression Paradigm", Perceptual and Motor Skills, 1985, 60, 827-830
(2) Slater, E., & Beard, E.W., "Schizophrenia-like Psychoses of Epilepsy", British Journal of Psychiatry, 1963, 109, 95-150
(3) Umbricht, Daniel, et al., "Postictal and Chronic Psychoses in Patients with Temporal Lobe Epilepsy", American Journal of Psychiatry, 1995, 152:2, 224-231
(4) Persinger, Michael A., Neuropsychological Bases of God-Beliefs, Praeger, 1987
(5) Murphy, Todd, "The Structure and Function of Near-Death Experiences: An Algorithmic Reincarnation Hypothesis", Journal of Near-Death Studies (in press)
(6) Hunt, Harry, et al., "Transpersonal Effects in Childhood: An Exploratory Empirical Study of Selected Adult Groups", Perceptual and Motor Skills, 1992, 75, 1135-1153
(7) Perry, Bruce, et al., "Childhood Trauma, the Neurobiology of Adaptation, and 'Use-dependent' Development of the Brain: How 'States' Become 'Traits'", Infant Mental Health Journal, Vol. 16, No. 4, Winter 1995, 271-291
(8) Simon-Buller, Sherry, M.S.,
"Correlates of Sensing the Presence of a deceased Spouse" Omega, Vol 19(1) 1988-89 [9]Shakti - Magnetic Brain Stimulation [10]Deja Vu [11]Darwinian Reincarnation [12]Shakti LITE [13]Consciousness [14]Romantic Love and the Brain [15]Origins of spirituality in Human Evolution [16]Sacred Lands [17]"The Sensed Presence" [18]Glasses For Enhanced Visual Acuity [19]God in the Brain [20]Spiritual Aptitude Test [21]Stimulating My BrainAs A Spiritual Path [22]Inventing Shakti [23]Sex_and States of Consciousness [24]The Gay Male Brain - Evolutionary Speculations [25]Visions [26]The Spiritual Personality [27]Enlightenment And the Brain [28]Archetypes [29]A Diet For Epileptics? [30]Odd Experiences - Online Poll Results [31]Brain_News [32]Out-Of-Body Experiences [33]Near-Death Experiences - Thai Case histories [34]The Big Bang [35]Meditations from Brain Science [36]Near-Death Experiences in Thailand - Discussion [37]Downloads [38]The Terrorist Brain [39]Publications by Dr. M.A. Persinger [40]Credentials [41]Hippocrates on Epilepsy References 1. http://www.jps.net/brainsci/ 2. mailto:brainsci at jp.net 3. http://www.jps.net/brainsci/romance.htm 4. mailto:brainsci at jps.net 5. http://www.innerworlds.50megs.com/ 6. http://www.jps.net/brainsci/rebirth.htm 7. http://www.innerworlds.50megs.com/index.htm 8. mailto:brainsci at jps.net 9. http://www.innerworlds.50megs.com/shakti/index.htm 10. http://www.innerworlds.50megs.com/dejavu.htm 11. http://www.innerworlds.50megs.com/rebirth.htm 12. http://www.innerworlds.50megs.com/shakti_lite/index.htm 13. http://www.innerworlds.50megs.com/consciousness.htm 14. http://www.innerworlds.50megs.com/romance.htm 15. http://www.innerworlds.50megs.com/deathanxiety.htm 16. http://www.innerworlds.50megs.com/earthfee.htm 17. http://www.innerworlds.50megs.com/sp.htm 18. http://www.innerworlds.50megs.com/evaglasses.htm 19. http://www.innerworlds.50megs.com/god.htm 20. http://www.innerworlds.50megs.com/anchored_TL_test.htm 21. 
From checker at panix.com Mon Jul 4 01:28:26 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 3 Jul 2005 21:28:26 -0400 (EDT)
Subject: [Paleopsych] NS: Entering a dark age of innovation
Message-ID:

Entering a dark age of innovation
http://www.newscientist.com/article.ns?id=dn7616&print=true
* 14:00 02 July 2005
* Robert Adler

SURFING the web and making free internet phone calls on your Wi-Fi laptop, listening to your iPod on the way home, it often seems that, technologically speaking, we are enjoying a golden age. Human inventiveness is so finely honed, and the globalised technology industries so productive, that there appears to be an invention to cater for every modern whim. But according to a new analysis, this view couldn't be more wrong: far from being in technological nirvana, we are fast approaching a new dark age.
That, at least, is the conclusion of Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center in China Lake, California. He says the rate of technological innovation reached a peak a century ago and has been declining ever since. And like the lookout on the Titanic who spotted the fateful iceberg, Huebner sees the end of innovation looming dead ahead. His study will be published in Technological Forecasting and Social Change.

It's an unfashionable view. Most futurologists say technology is developing at exponential rates. Moore's law, for example, foresaw chip densities (for which read speed and memory capacity) doubling every 18 months. And the chip makers have lived up to its predictions. Building on this, the less well-known Kurzweil's law says that these faster, smarter chips are leading to even faster growth in the power of computers. Developments in genome sequencing and nanoscale machinery are racing ahead too, and internet connectivity and telecommunications bandwidth are growing even faster than computer power, catalysing still further waves of innovation.

But Huebner is confident of his facts. He has long been struck by the fact that promised advances were not appearing as quickly as predicted. "I wondered if there was a reason for this," he says. "Perhaps there is a limit to what technology can achieve."

In an effort to find out, he plotted major innovations and scientific advances over time compared to world population, using the 7200 key innovations listed in a recently published book, The History of Science and Technology (Houghton Mifflin, 2004). The results surprised him. Rather than growing exponentially, or even keeping pace with population growth, they peaked in 1873 and have been declining ever since (see Graphs). Next, he examined the number of patents granted in the US from 1790 to the present.
When he plotted the number of US patents granted per decade divided by the country's population, he found the graph peaked in 1915. The period between 1873 and 1915 was certainly an innovative one. For instance, it included the major patent-producing years of America's greatest inventor, Thomas Edison (1847-1931). Edison patented more than 1000 inventions, including the incandescent bulb, electricity generation and distribution grids, movie cameras and the phonograph.

Medieval future

Huebner draws some stark lessons from his analysis. The global rate of innovation today, which is running at seven "important technological developments" per billion people per year, matches the rate in 1600. Despite far higher standards of education and massive R&D funding, "it is more difficult now for people to develop new technology", Huebner says.

Extrapolating Huebner's global innovation curve just two decades into the future, the innovation rate plummets to medieval levels. "We are approaching the 'dark ages point', when the rate of innovation is the same as it was during the Dark Ages," Huebner says. "We'll reach that in 2024." But today's much larger population means that the number of innovations per year will still be far higher than in medieval times. "I'm certainly not predicting that the dark ages will reoccur in 2024, if at all," he says. Nevertheless, the point at which an extrapolation of his global innovation curve hits zero suggests we have already made 85 per cent of the technologies that are economically feasible.

But why does he think this has happened? He likens the way technologies develop to a tree. "You have the trunk and major branches, covering major fields like transportation or the generation of energy," he says. "Right now we are filling out the minor branches and twigs and leaves. The major question is, are there any major branches left to discover? My feeling is we've discovered most of the major branches on the tree of technology."
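Both of Huebner's measures are simple per-capita rates: key innovations (or patents) per period, divided by the population at the time. A minimal sketch of that calculation, using invented illustrative figures rather than his actual dataset:

```python
# Per-capita innovation rate, in the spirit of Huebner's analysis.
# The figures below are invented for illustration; they are NOT the data
# from The History of Science and Technology or the US patent office.
innovations_per_decade = {1850: 420, 1870: 510, 1890: 480, 1910: 460, 1930: 400}
population_billions = {1850: 1.2, 1870: 1.3, 1890: 1.5, 1910: 1.75, 1930: 2.0}

def per_capita_rate(decade):
    """Innovations per billion people per year for the given decade."""
    return innovations_per_decade[decade] / (population_billions[decade] * 10)

# The decade with the highest per-capita rate.
peak_decade = max(innovations_per_decade, key=per_capita_rate)
print(peak_decade, round(per_capita_rate(peak_decade), 1))  # → 1870 39.2
```

With real data, Huebner reports the same shape: a rate that rises to a 19th-century peak and falls thereafter, even while the absolute number of innovations keeps growing with population.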
But artificial intelligence expert Ray Kurzweil - who formulated the aforementioned law - thinks Huebner has got it all wrong. "He uses an arbitrary list of about 7000 events that have no basis as a measure of innovation. If one uses arbitrary measures, the results will not be meaningful." Eric Drexler, who dreamed up some of the key ideas underlying nanotechnology, agrees. "A more direct and detailed way to quantify technology history is to track various capabilities, such as speed of transport, data-channel bandwidth, cost of computation," he says. "Some have followed exponential trends, some have not." Drexler says nanotechnology alone will smash the barriers Huebner foresees, never mind other branches of technology. It's only a matter of time, he says, before nanoengineers will surpass what cells do, making possible atom-by-atom desktop manufacturing. "Although this result will require many years of research and development, no physical or economic obstacle blocks its achievement," he says. "The resulting advances seem well above the curve that Dr Huebner projects." At the Acceleration Studies Foundation, a non-profit think tank in San Pedro, California, John Smart examines why technological change is progressing so fast. Looking at the growth of nanotechnology and artificial intelligence, Smart agrees with Kurzweil that we are rocketing toward a technological "singularity" - a point sometime between 2040 and 2080 where change is so blindingly fast that we just can't predict where it will go. Smart also accepts Huebner's findings, but with a reservation. Innovation may seem to be slowing even as its real pace accelerates, he says, because it's slipping from human hands and so fading from human view. More and more, he says, progress takes place "under the hood" in the form of abstract computing processes. Huebner's analysis misses this entirely. Take a modern car. 
"Think of the amount of computation - design, supply chain and process automation - that went into building it," Smart says. "Computations have become so incremental and abstract that we no longer see them as innovations. People are heading for a comfortable cocoon where the machines are doing the work and the innovating," he says. "But we're not measuring that very well." Huebner disagrees. "It doesn't matter if it is humans or machines that are the source of innovation. If it isn't noticeable to the people who chronicle technological history then it is probably a minor event." A middle path between Huebner's warning of an imminent end to tech progress, and Kurzweil and Smart's equally imminent encounter with a silicon singularity, has been staked out by Ted Modis, a Swiss physicist and futurologist. Modis agrees with Huebner that an exponential rate of change cannot be sustained and his findings, like Huebner's, suggest that technological change will not increase forever. But rather than expecting innovation to plummet, Modis foresees a long, slow decline that mirrors technology's climb. At the peak "I see the world being presently at the peak of its rate of change and that there is ahead of us as much change as there is behind us," Modis says. "I don't subscribe to the continually exponential rate of growth, nor to an imminent drying up of innovation." So who is right? The high-tech gurus who predict exponentially increasing change up to and through a blinding event horizon? Huebner, who foresees a looming collision with technology's limits? Or Modis, who expects a long, slow decline? The impasse has parallels with cosmology during much of the 20th century, when theorists debated endlessly whether the universe would keep expanding, creep toward a steady state, or collapse. It took new and better measurements to break the log jam, leading to the surprising discovery that the rate of expansion is actually accelerating. 
Perhaps it is significant that all the mutually exclusive techno-projections focus on exponential technological growth. Innovation theorist Ilkka Tuomi at the Institute for Prospective Technological Studies in Seville, Spain, says: "Exponential growth is very uncommon in the real world. It usually ends when it starts to matter." And it looks like it is starting to matter.

From shovland at mindspring.com Mon Jul 4 21:35:11 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Mon, 4 Jul 2005 14:35:11 -0700
Subject: [Paleopsych] Global Brain on Overdrive
Message-ID: <01C580A5.967004D0.shovland@mindspring.com>

These days when the President speaks we start talking back almost as soon as he starts. Emails, phone calls, faxes, and blogs begin to communicate our response to those who are listening on the other end, and these days they are listening. Then they adjust, if they can.
Steve Hovland
www.stevehovland.net

From checker at panix.com Wed Jul 6 00:28:49 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 5 Jul 2005 20:28:49 -0400 (EDT)
Subject: [Paleopsych] Guardian: Where belief is born
Message-ID:

Where belief is born
http://www.guardian.co.uk/print/0,3858,5226946-111414,00.html
Scientists have begun to look in a different way at how the brain creates the convictions that mould our relationships and inform our behaviour. Alok Jha reports
Thursday June 30, 2005

Belief can make people do the strangest things. At one level, it provides a moral framework, sets preferences and steers relationships. On another, it can be devastating. Belief can manifest itself as prejudice or persuade someone to blow up themselves and others in the name of a political cause.

"Belief has been a most powerful component of human nature that has somewhat been neglected," says Peter Halligan, a psychologist at Cardiff University. "But it has been capitalised on by marketing agents, politics and religion for the best part of two millennia."

That is changing. Once the preserve of philosophers alone, belief is quickly becoming the subject of choice for many psychologists and neuroscientists. Their goal is to create a neurological model of how beliefs are formed, how they affect people and what can manipulate them. And the latest steps in the research might just help to understand a little more about why the world is so fraught with political and social tension.

Matthew Lieberman, a psychologist at the University of California, recently showed how beliefs help people's brains categorise others and view objects as good or bad, largely unconsciously. He demonstrated that beliefs (in this case prejudice or fear) are most likely to be learned from the prevailing culture.
When Lieberman showed a group of people photographs of expressionless black faces, he was surprised to find that the amygdala - the brain's panic button - was triggered in almost two-thirds of cases. There was no difference in the response between black and white people. The amygdala is responsible for the body's fight or flight response, setting off a chain of biological changes that prepare the body to respond to danger well before the brain is conscious of any threat. Lieberman suggests that people are likely to pick up on stereotypes, regardless of whether their family or community agrees with them. The work, published last month in Nature Neuroscience, is the latest in a rapidly growing field of research called "social neuroscience", a wide arena which draws together psychologists, neuroscientists and anthropologists all studying the neural basis for the social interaction between humans. Traditionally, cognitive neuroscientists focused on scanning the brains of people doing specific tasks such as eating or listening to music, while social psychologists and social scientists concentrated on groups of people and the interactions between them. To understand how the brain makes sense of the world, it was inevitable that these two groups would have to get together. "In the West, most of our physical needs are provided for. We have a level of luxury and civilisation that is pretty much unparalleled," says Kathleen Taylor, a neuroscientist at Oxford University. "That leaves us with a lot more leisure and more space in our heads for thinking." Beliefs and ideas therefore become our currency, says Taylor. Society is no longer a question of simple survival; it is about choice of companions and views, pressures, ideas, options and preferences. "It is quite an exciting development but for people outside the field, a very obvious one," says Halligan. Understanding belief is not a trivial task, even for the seemingly simplest of human interactions. 
Take a conversation between two people. When one talks, the other's brain is processing information through their auditory system at a phenomenal rate. That person's beliefs act as filters for the deluge of sensory information and guide the brain's response. Lieberman's recent work echoed parts of earlier research by Joel Winston of the University of London's Wellcome Department of Imaging Neuroscience. Winston found that when he presented people with pictures of faces and asked them to rate the trustworthiness of each, the amygdalas showed a greater response to pictures of people who were specifically chosen to represent untrustworthiness. And it did not matter what each person actually said about the pictured faces. "Even people who believe to their core that they do not have prejudices may still have negative associations that are not conscious," says Lieberman. Beliefs also provide stability. When a new piece of sensory information comes in, it is assessed against these knowledge units before the brain works out whether or not it should be incorporated. People do it when they test the credibility of a politician or hear about a paranormal event. Physically speaking, then, how does a belief exist in the brain? "My own position is to think of beliefs and memories as very similar," says Taylor. Memories are formed in the brain as networks of neurons that fire when stimulated by an event. The more times the network is employed, the more it fires and the stronger the memory becomes. Halligan says that belief takes the concept of memory a step further. "A belief is a mental architecture of how we interpret the world," he says. "We have lots of fluid things moving by - perceptions and so forth - but at the level of who our friends are and so on, those things are consolidated in crystallised knowledge units. If we did not have those, every time we woke up, how would we know who we are?" 
These knowledge units help to assess threats - via the amygdala - based on experience. Ralph Adolphs, a neurologist at the University of Iowa, found that if the amygdala was damaged, the ability of a person to recognise expressions of fear was impaired. A separate study by Adolphs with Simon Baron-Cohen at Cambridge University showed that amygdala damage had a bigger negative impact on the brain's ability to recognise social emotions, while more basic emotions seemed unaffected. This work on the amygdala shows it is a key part of the threat-assessment response and, in no small part, in the formation of beliefs. Damage to this alarm bell - and subsequent inability to judge when a situation might be dangerous - can be life-threatening. In hunter-gatherer days, beliefs may have been fundamental to human survival. Neuroscientists have long looked at brains that do not function properly to understand how healthy ones work. Researchers of belief formation do the same thing, albeit with a twist. "You look at people who have delusions," says Halligan. "The assumption is that a delusion is a false belief. That is saying that the content of it is wrong, but it still has the construct of a belief." In people suffering from prosopagnosia, for example, parts of the brain are damaged so that the person can no longer recognise faces. In the Cotard delusion, people believe they are dead. Fregoli delusion is the belief that the sufferer is constantly being followed around by people in disguise. Capgras' delusion, named after its discoverer, the French psychiatrist Jean Marie Joseph Capgras, is a belief that someone emotionally close has been replaced by an identical impostor. Until recently, these conditions were regarded as psychiatric problems. But closer study reveals that, in the case of Capgras' delusion for example, a significant proportion of sufferers had lesions in their brain, typically in the right hemisphere. 
"There are studies indicating that some people who have suffered brain damage retain some of their religious or political beliefs," says Halligan. "That's interesting because whatever beliefs are, they must be held in memory." Another route to understanding how beliefs form is to look at how they can be manipulated. In her book on the history of brainwashing, Taylor describes how everyone from the Chinese thought reform camps of the last century to religious cults have used systematic methods to persuade people to change their ideas, sometimes radically. The first step is to isolate a person and control what information they receive. Their former beliefs need to be challenged by creating uncertainty. New messages need to be repeated endlessly. And the whole thing needs to be done in a pressured, emotional environment. "Beliefs are mental objects in the sense that they are embedded in the brain," says Taylor. "If you challenge them by contradiction, or just by cutting them off from the stimuli that make you think about them, then they are going to weaken slightly. If that is combined with very strong reinforcement of new beliefs, then you're going to get a shift in emphasis from one to the other." The mechanism Taylor describes is similar to the way the brain learns normally. In brainwashing though, the new beliefs are inserted through a much more intensified version of that process. This manipulation of belief happens every day. Politics is a fertile arena, especially in times of anxiety. "Stress affects the brain such that it makes people more likely to fall back on things they know well - stereotypes and simple ways of thinking," says Taylor. "It is very easy to want to do that when everything you hold dear is being challenged. In a sense, it was after 9/11." The stress of the terror attacks on the US in 2001 changed the way many Americans viewed the world, and Taylor argues that it left the population open to tricks of belief manipulation. 
A recent survey, for example, found that more than half of Americans thought Iraqis were involved in the attacks, despite the fact that nobody had come out and said it. This method of association uses the brain against itself. If an event stimulates two sets of neurons, then the links between them get stronger. If one of them activates, it is more likely that the second set will also fire. In the real world, those two memories may have little to do with each other, but in the brain, they get associated. Taylor cites an example from a recent manifesto by the British National Party, which argues that asylum seekers have been dumped on Britain and that they should be made to clear up rubbish from the streets. "What they are trying to do is to link the notion of asylum seekers with all the negative emotions you get from reading about garbage, [but] they are not actually coming out and saying asylum seekers are garbage," she says. The 9/11 attacks highlight another extreme in the power of beliefs. "Belief could drive people to agree to premeditate something like that in the full knowledge that they would all die," says Halligan of the hijacker pilots. It is unlikely that beliefs as wide-ranging as justice, religion, prejudice or politics are simply waiting to be found in the brain as discrete networks of neurons, each encoding for something different. "There's probably a whole combination of things that go together," says Halligan. And depending on the level of significance of a belief, there could be several networks at play. Someone with strong religious beliefs, for example, might find that they are more emotionally drawn into certain discussions because they have a large number of neural networks feeding into that belief. "If you happen to have a predisposition, racism for example, then it may be that you see things in a certain way and you will explain it in a certain way," says Halligan. 
Halligan argues that the reductionist approach of social neuroscience will alter the way people study society. "If you are brain scanning, what are the implications for privacy in terms of knowing another's thoughts? And being able to use those, as some governments are implying, in terms of being able to detect terrorists and things like that," he says. "If you move down the line in terms of potential uses for these things, you have potential uses for education and for treatments being used as cognitive enhancers." So far, social neuroscience has provided more questions than answers. Ralph Adolphs of the University of Iowa looked to the future in a review paper for Nature. "How can causal networks explain the many correlations between brain and behaviour that we are discovering? Can large-scale social behaviour, as studied by political science and economics, be understood by studying social cognition in individual subjects? Finally, what power will insights from cognitive neuroscience give us to influence social behaviour, and hence society? And to what extent would such pursuit be morally defensible?" The answers to those questions may well shape people's understanding of what it really means to believe. From checker at panix.com Wed Jul 6 00:29:03 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:29:03 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: The Big Bang Message-ID: The Big Bang New York Times Op-Ed, 5.7.3 http://www.nytimes.com/2005/07/03/opinion/03grinspoon.html By DAVID GRINSPOON Boulder, Colo. THE future wasn't supposed to be like this. Not for space-age kids like me, growing up enchanted by the Apollo Moon landings and Arthur C. Clarke's "2001: A Space Odyssey." By now we should be living on the Moon and departing in marvelous ships for the outer solar system, while new technologies gradually make life back on Earth more bountiful and harmonious. Instead 2001 and the years since have been marked by terrorism and conflict. 
Starvation and environmental destruction have not been eradicated or even stemmed. We have, for now, lost the ability to send people to the Moon, let alone Jupiter and beyond, and, for many of us, the future is not as hopeful a place as it once seemed. Yet tomorrow, we should see one tiny part of Mr. Clarke's grand vision realized - through NASA's Deep Impact mission (an unfortunate echo of a less visionary film). The part of Mr. Clarke's vision I refer to is a small scene in "2001," halfway through the book, that didn't even make the movie. The Saturn-bound scientists, having rounded Mars and now approaching Jupiter, pass close by a small asteroid. They greet this rocky celestial nomad by shooting it with a slug of metal that explodes into the asteroid, leaving a new crater and a brief puff of vapor that soon vanishes into the void. This Sunday night, if all goes as planned, NASA will finally pull off this same stunt, firing a three-foot-wide 820-pound copper barrel directly into the path of a nine-mile-long, potato-shaped comet by the name of Tempel 1. The two will collide at 23,000 miles an hour while a mother craft photographs the action from what one hopes will be a safe distance, and sends the pictures home to us at the speed of light. Why? So we can watch what happens. We stand to learn a lot about impact cratering - one of the major forces that has shaped all the worlds of our solar system. We will also have the chance to peer into the newly formed crater and observe the ice and vapor blasted back into space, thereby learning what lies within this frigid little world. When I describe this mission to people outside the community of space scientists and enthusiasts, it receives mixed reactions. Some feel that this is a fine hello to a new world, blasting away at it just to see what happens, like greeting a stranger by shooting first and asking questions later. 
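The collision figures quoted above allow a quick back-of-the-envelope energy check. The mass and speed are the article's; the unit conversions and TNT equivalence are standard values, and the result is a rough estimate only:

```python
# Rough kinetic energy of the Deep Impact collision, from the figures above:
# an 820-pound copper impactor striking at 23,000 miles per hour.
LB_TO_KG = 0.4536       # pounds to kilograms
MPH_TO_MS = 0.44704     # miles per hour to meters per second
TON_TNT_J = 4.184e9     # energy of one ton of TNT, in joules

mass_kg = 820 * LB_TO_KG        # ~372 kg
speed_ms = 23_000 * MPH_TO_MS   # ~10.3 km/s

energy_j = 0.5 * mass_kg * speed_ms ** 2   # kinetic energy, E = (1/2) m v^2
print(f"{energy_j:.2e} J, ~{energy_j / TON_TNT_J:.1f} tons of TNT")
# roughly 2e10 J, on the order of 5 tons of TNT -- a large firework,
# but nothing on the scale of the comet itself.
```

The arithmetic supports the op-ed's point: the blast is spectacular by human standards yet negligible for a nine-mile body.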
A lawsuit has even been filed in a Russian court by a 45-year-old mother of two in Moscow, demanding that the mission be called off on the basis of its environmental and spiritual, well, impact. This legal action seems even more certainly doomed than the spacecraft itself (which may miss its target). Yet perhaps it does epitomize the concerns of many who wonder why we would do such a thing. Aren't we going too far to satisfy our curiosity here, acting like cruel, senseless boys blowing up frogs for the fun of it? Um, no. This explosion is not going to hurt anyone or anything. Here's an analogy. You would be justifiably concerned if, in order to learn about shorelines, some scientist decided to dig up your favorite beach. But you wouldn't object if she took a few grains of sand to study. There are something like one trillion comets larger than one mile in diameter, several hundred for each human on Earth, in this solar system alone, and countless more in the wider universe. So even if we destroyed Tempel 1 entirely, we would not be making a dent in the cometary sandbox. What's more, this mission will not demolish the comet, alter its course, or otherwise affect the cosmic scheme. Comets collide with other celestial objects all the time. The only thing extraordinary about this particular impact is that we engineered it. Deep Impact will simply make one more small hole in an object that, like all planets large and small, has been repeatedly dinged by colliding space debris since our solar system's origin 4.6 billion years ago. It is those dusky beginnings that this experiment can illuminate. Beneath the dirty ice crust of a comet like Tempel 1 is material that has been in deep freeze since the birth of our solar system. Mixed into this timeless frozen treat are organic molecules like those that seeded the young Earth with raw materials for making life. This ice may hold some buried chapters of the story of our origin. As H. G. Wells, the Arthur C. 
Clarke of the paleoindustrial age, once wrote: "There is no way back into the past. The choice is the Universe - or nothing." It has been said that the dinosaurs ultimately got snuffed because they lacked a space program. Sooner or later a killer comet will again cross Earth's path, threatening all life. Only next time, armed with knowledge about comets and space engineering, life on Earth will have a fighting chance. Someday, some of our descendants may decide to declare independence from this planet, seeking a more perfect union with the cosmos from which we spring. If so, then our current, tentative efforts in space may carry evolutionary significance equal to life's first forays from the oceans onto land. Given the recent reckless talk from the Department of Defense about introducing offensive weapons into space, Deep Impact will probably be seen in some quarters as more evidence of American aggression. In reality, it is the opposite - a peaceful gift from our nation to the world. Deep Impact is pure exploration. In this sense, we have evolved. Unlike Apollo, which was meant in part as a cold war threat to the Russians, Deep Impact really is for all humankind: it could further our understanding of where we all came from. Of course, explosions are cool (when they aren't hurting anyone). They're also often quite beautiful. Why, after all, do we love to watch fireworks? The flash of Deep Impact exploding into Tempel 1 may be visible from Earth through telescopes (and even, just possibly, to the naked eye, but not from the Eastern United States) at 1:52 a.m. Eastern time on July 4, above the bright star Spica, and to the left of Jupiter. Public events, showing live images from the world's best telescopes and, 10 minutes later, the first pictures from Deep Impact itself, are planned at many science museums. If successful, first-ever images of the approaching comet, the brilliant impact, the new crater and the receding icy nucleus will be seen soon thereafter. 
The scientific analysis that reveals the true meaning will be slower in coming, but once it arrives, the knowledge will be here as long as we are. David Grinspoon, a planetary scientist at the Southwest Research Institute, is the author of "Lonely Planets: The Natural Philosophy of Alien Life." From checker at panix.com Wed Jul 6 00:29:09 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:29:09 -0400 (EDT) Subject: [Paleopsych] NYT: O.K., Japan Isn't Taking Over the World. But China... Message-ID: O.K., Japan Isn't Taking Over the World. But China... New York Times, 5.7.3 http://www.nytimes.com/2005/07/03/weekinreview/03port.html By [3]EDUARDO PORTER NOT even 20 years have passed since the apparently unstoppable Japanese economic juggernaut struck fear in the hearts of Americans, and now China has emerged to be seen as the new economic menace threatening the nation's vital strategic interests. America's boom in the 1990's, coupled with Japan's decline into an economic quagmire through much of the decade, quelled most fears that the Japanese were going to eat our economic lunch. But now China has set out to snap up everything from Unocal to Maytag, not to mention a steady diet of United States Treasury bonds. And many of the leading voices who worried about Japan in the 1980's are warning that China presents a much bigger and more complex conundrum. "In retrospect I probably did overstate the nature of the Japanese challenge," said Chalmers Johnson, a prominent expert on Asia who in the early 1990's argued that Japan was "the only nation with real leverage over the United States." But, he added, "China is several orders of magnitude different from Japan." China is not only much bigger and more populous. Its economy is likely to become the largest in the world at some point in the next 50 years. As China keeps growing at a rubber-burning pace, competition with the United States over energy resources alone could cause substantial tension. 
Americans' fear of Japan's ascendancy in the 1980's was inspired by economics and pride. The growing bilateral trade deficit, as Japanese companies acquired leadership in industries that were once dominated by American businesses, cast a pall on America's self-confidence. The Japanese purchase of high-profile American assets, whether Columbia Pictures or the Pebble Beach golf course, just rubbed it in. Relations with China have a more complex geopolitical dimension. Unlike Japan, China is likely to become a military power. And it is not an unconditional ally. From Taiwan to the Middle East, the strategic interests of China diverge from America's. As it throws its weight around, to secure supplies of energy, say, or to avail itself of strategic technology, China can cause American policy makers no end of discomfort. For instance, if the United States government were to block the China National Offshore Oil Corporation from acquiring Unocal, it might just push China into cutting energy deals that the United States government would rather it did not make with countries like Russia or Iran. "Clearly the relationship between the United States and China is much more ambivalent than that between the United States and Japan," said Clyde V. Prestowitz, a trade negotiator during the Reagan administration who in the 1980's warned that Japan's ascent could eclipse the United States' power and compromise its prosperity. Those differences might warrant a more careful review of deals like the attempted purchase of Unocal by China's state run oil company. Robert B. Reich, the former secretary of labor in the Clinton administration who as a Harvard professor in the early 1990's argued that big Japanese investments in the United States were not threatening, says today that a Chinese acquisition of a potentially strategic asset like Unocal could be problematic. "In economic terms there is no reason to block Chinese ownership of U.S. 
assets just as there was no reason to block Japan from buying U.S. assets in the 1980's," Mr. Reich said. "But in political terms in 2005 there may be a reason to take seriously the downside of China owning Unocal." But even those most concerned about China's rise up the economic, political and military ladder recognize another, perhaps more important difference between China and 1980's Japan. The United States has a vested interest in China's success. For starters, whereas Japan's success at the time was inevitably seen as America's failure, today American businesses are all rooting for China to succeed. Because they own a lot of it. "It was virtually impossible for a foreigner to make an acquisition in Japan," Mr. Prestowitz said. "In China it's 'y'all come.' American business is part of the China lobby, not the anti-China lobby." Some of Japan's old foes view America's growing interdependence with China with suspicion. "Interdependence means dependence," said Susan J. Tolchin, a professor of public policy at George Mason University who in 1993 was the co-author of "Selling Our Security," which argued against allowing foreign investment in American technology companies. "If we lose our economic independence we are going to lose our independence of movement on foreign policy." Yet most acknowledge that China's transformation from a struggling Communist state to a prosperous nation with a growing stake in the global system of market economies is in America's best interest. For example, if the Chinese central bank owns oodles of United States Treasury bonds, it has a reason not to want to destabilize the bond market. "I cannot see how a rich bourgeois China could be not in our interest," Mr. Johnson said. "If we're interested in our security we should establish collaborative ties with China right now." The difference in emphasis appears in a shift in Mr. Prestowitz's writing about Asia. 
In 1988, he published "Trading Places: How We Are Giving Our Future to Japan and How to Reclaim It." This year, he published "Three Billion New Capitalists: The Great Shift of Wealth and Power to the East." Today, Mr. Prestowitz said, the biggest risk is not that China will succeed in rising to become an economic superpower. The biggest risk is that it will fail. From checker at panix.com Wed Jul 6 00:29:18 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:29:18 -0400 (EDT) Subject: [Paleopsych] NYT: Blockbuster Drugs Are So Last Century Message-ID: Blockbuster Drugs Are So Last Century New York Times, 5.7.3 http://www.nytimes.com/2005/07/03/business/yourmoney/03drug.html By [3]ALEX BERENSON INDIANAPOLIS DRUG companies do an awful job of finding new medicines. They rely too much on billion-dollar blockbuster drugs that are both overmarketed and overprescribed. And they have been too slow to disclose side effects of popular medicines. Typical complaints from drug industry critics, right? Well, yes. Only this time they come from executives at Eli Lilly, the sixth-largest American drug maker and the company that invented Prozac. From this placid Midwestern city, well removed from the Boston-to-Washington corridor that is the core of the pharmaceutical industry, Lilly is ambitiously rethinking the way drugs are discovered and sold. In a speech to shareholders in April, Sidney Taurel, Lilly's chief executive, presented the company's new strategy in a pithy phrase: "the right dose of the right drug to the right patient at the right time." In other words, Lilly sees its future not in blockbuster medicines like Prozac that are meant for tens of millions of patients, but rather in drugs that are aimed at smaller groups and can be developed more quickly and cheaply, possibly with fewer side effects. There is no guarantee, of course, that Lilly will succeed. 
And some Wall Street analysts complain about the recent track record of the company, saying that it has habitually overpromised the potential of its drugs and taken one-time charges that distort its reported profits. In the last year, Lilly's stock has fallen 21 percent, while shares in the average big drug maker have been flat. Still, since late 2001, Lilly's labs have produced five truly new drugs, including treatments for osteoporosis, depression and lung cancer. The total exceeds that of many of its much-larger competitors. And at a time when the drug industry seems adrift, that Lilly has any vision at all for the future is striking. "The challenge for us as an industry, as a company, is to move more from a blockbuster model to a targeted model," Mr. Taurel said at Lilly's headquarters here recently. "We need a better value proposition than today." For five years, drug companies have struggled to bring new medicines to market. But Lilly executives say they believe that the drought is not permanent. Advances in understanding the ways that cells and genes work will soon lead to important new drugs, said Peter Johnson, executive director of corporate strategy. Moreover, Lilly expects that drug makers without breakthrough medicines that are either the first or the best in their categories will face increasing pressure from insurers to cut prices or lose coverage. If that vision is correct, the industry's winners will be companies that invest heavily in research and differentiate themselves by focusing on a few diseases instead of on building size and cutting costs through mergers, as [4]Pfizer has done. Lilly, which spends nearly 20 percent of its sales on research, compared with about 16 percent for the average drug company, may be well positioned for the future. "We do not believe that size pays off for anybody, especially size acquired in an acquisition," Mr. Taurel said. 
But if Lilly is wrong about the industry's direction, or if its research efforts fail, it could wind up like [5]Merck, the third-biggest American drug company, which has also adamantly opposed mergers and bet instead on its labs. After its own eight-year drought of major new drugs, Merck has had a 65 percent decline in its stock price since 2000, and its chief executive was forced out in May. Mr. Johnson acknowledges that Lilly's strategy is risky. "You can't make a discovery operation invent what you want them to invent," he said. So Lilly is seeking to improve its odds and to cut research costs by changing the way it develops drugs, said Dr. Steven M. Paul, president of the company's laboratories. Bringing a drug to market cost more than $900 million on average in 2003, compared with $230 million in 1987, according to estimates from Lilly and industry groups. But the public's willingness to accept side effects is shrinking, and some drug-safety experts and lawmakers want even larger and longer clinical trials for new drugs, increasing development costs. If nothing changes, Lilly expects that by 2010, the cost of finding a single new drug may reach $2 billion by 2010, an unsustainable amount, Dr. Paul said. "We've got to do something to reduce the costs," he added. The biggest expense in drug development comes not from early-stage research, he said, but from the failure of drugs after they have left the labs and been tested in humans. A drug that has moved into first-stage human clinical trials now has only about an 8 percent chance of reaching the market. Even in late-stage trials, about half of all drugs fail, often because they do not prove better than existing treatments. To change that, Lilly is focusing its research efforts on finding biomarkers - genes or other cellular signals that will indicate which patients are most likely to respond to a given drug. 
Other drug makers are also searching for biomarkers, but Lilly executives are the most vocal in expressing their belief that this area of research will fundamentally change the way drugs are developed. Using biomarkers should make drugs more effective and reduce side effects, Dr. Paul said. If all goes as planned, the company will know sooner whether its drugs are working, and will develop fewer drugs that fail in clinical trials. The company may even be able to use shorter, smaller clinical trials because its drugs will demonstrate their effectiveness more quickly. To improve its chances further, Lilly has focused its research efforts on four types of diseases: diabetes, cancer, mental illness and some heart ailments. In each category, it has had a history of successful drugs. The company hopes to reduce the cost of new development to about $700 million a drug by 2010. Because Lilly now spends about $2.7 billion annually on research, that figure would imply that the company could develop as many as four new drugs a year, compared with just one a year if current trends do not change. Among the company's most promising drugs in development are ruboxistaurin, for diabetes complications; arzoxifene, for the prevention of osteoporosis and breast cancer; and enzastaurin, for brain tumors and other cancers. The flip side of Lilly's plan is that drugs it develops may be used more narrowly than current treatments. For example, the company may find that a diabetes drug works best in patients under 40 with a specific genetic marker, and enroll only those patients in its clinical trials. While doctors can legally prescribe any medicine for any reason once it is on the market, insurers would probably balk at covering the drug for diabetics over 40 or for patients without the genetic marker. "The old model was, one size fits a whole lot of people," said Mr. Johnson, Lilly's strategist. 
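The attrition and budget numbers in this passage can be tied together with simple expected-value arithmetic. This sketch uses only the article's figures (an 8 percent success rate from first-stage trials, a $2.7 billion annual research budget, $700 million versus roughly $2 billion per approved drug) and treats attrition as a single overall probability, which is an illustrative simplification of the staged reality:

```python
# Expected-value sketch: if only a fraction p of candidates entering human
# trials ever reaches the market, each approval must absorb the cost of
# roughly 1/p attempts, and a fixed budget yields budget/cost drugs a year.

def approvals_per_year(annual_budget, cost_per_approved_drug):
    """New drugs a fixed annual research budget yields, on average."""
    return annual_budget / cost_per_approved_drug

budget = 2.7e9                 # Lilly's annual research spending
p_success = 0.08               # chance a phase-one candidate reaches market

print(1 / p_success)                       # 12.5 trial starts per approval
print(approvals_per_year(budget, 0.7e9))   # ~3.9 drugs/yr at $700 million each
print(approvals_per_year(budget, 2.0e9))   # ~1.35 drugs/yr at $2 billion each
```

The two print lines at the bottom reproduce the article's contrast: roughly four new drugs a year at the $700 million target versus about one a year if costs keep climbing.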
Last month, Lilly's vision of targeted therapies gained some ground - albeit at another company. The Food and Drug Administration approved BiDil, a heart drug from [6]NitroMed that is intended for use by African-Americans. The approval, based on a clinical trial that enrolled only black patients, was the first ever for a drug meant for one racial group. While race can be a crude characterization of groups, it can serve as an effective biomarker, scientists said. Lilly's road map may look appealing. But some analysts question whether the company is as different from the rest of the industry as it would like to believe. While it professes to see a future of narrowly marketed medicines, Lilly is more dependent than any other major drug maker on a single blockbuster drug: Zyprexa, its treatment for schizophrenia and manic depression. Zyprexa accounted for about $4.4 billion in sales last year, 30 percent of the company's total sales. And while Lilly executives say they want to avoid marketing their drugs too heavily or in anything less than a forthright way, federal prosecutors in Philadelphia are investigating its marketing practices for Zyprexa and Prozac. Last month, Lilly said it would pay $690 million to settle 8,000 lawsuits that contended that Zyprexa could cause obesity and diabetes and that the company had not properly disclosed that risk. Lilly says that it acted properly in marketing Zyprexa and that it is cooperating with the federal investigation. Still, the controversy has hurt Zyprexa sales, which fell 8 percent in the United States last year. Some of Lilly's newest drugs have been commercial disappointments. The company and analysts hoped that annual sales of Xigris, a treatment introduced in late 2001 for a blood infection called sepsis, could reach $1 billion; Xigris's sales were $200 million last year. Sales of Strattera, for attention deficit disorder, slowed after a report in December that the drug can cause a rare but serious form of liver damage. 
Michael Krensavage, an analyst at Raymond James & Associates who rates Lilly shares as underperform, said that Lilly's emphasis on targeted therapies might be a defensive response to the industry's recent inability to produce blockbusters. Rather than targeted treatments, "drug companies would hope to produce a medicine that works for everybody," Mr. Krensavage said. "That's certainly the goal." Mr. Krensavage also criticized Lilly's accounting, noting that the company has taken one-time charges in each of the last three years that have muddied its financial results. Lilly said its accounting complied with all federal rules. Despite the company's recent stumbles with Zyprexa, other analysts say Lilly is well positioned, and they praise Mr. Taurel for looking for innovative ways to lower the cost of drug development. "Sidney has a better concept of what's happening outside his four walls and is far better in reflecting that in how the company runs on a day-to-day basis than any of his peers," said Richard Evans, an analyst at Sanford C. Bernstein & Company. Mr. Taurel acknowledged Lilly's dependence on Zyprexa and the fact that some new drugs had not met expectations. But he said the transition to targeted therapies would take years, if not decades. With earnings last year of $3.1 billion, before one-time charges, and no major patent expirations before 2011, Lilly can afford to make long-term bets, he said. "Our model needs to evolve," he said. "For the industry and for Lilly." 
From checker at panix.com Wed Jul 6 00:32:22 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:32:22 -0400 (EDT) Subject: [Paleopsych] CHE: A glance at the current issue of Academic Medicine: Cultural differences in end-of-life medical care Message-ID: A glance at the current issue of Academic Medicine: Cultural differences in end-of-life medical care The Chronicle of Higher Education: Magazine & journal reader News bulletin from the Chronicle of Higher Education, 5.7.4 http://chronicle.com/prm/daily/2005/07/2005070101j.htm Medical students and residents often feel unprepared to care for dying patients, according to a series of articles in the journal, which is published by the Association of American Medical Colleges. Baback B. Gabbay, a resident in psychiatry at the University of California at Los Angeles, and his co-authors found that residents in Japan are more likely than their American counterparts to withhold information about a terminal diagnosis from a patient, telling only the patient's family instead. While medical educators in the United States strongly favor disclosing such information to patients, residents often struggle when that policy conflicts with a patient's cultural traditions. Another article describes the emotional reactions of third-year medical students to their "most memorable" patient deaths, concluding that students are deeply affected, even when they do not have close contact with the patient. Jennifer Rhodes-Kropf, an assistant professor of medicine at Yeshiva University's Albert Einstein College of Medicine, and her co-authors found that medical professors rarely hold "debriefing" sessions to allow students to discuss their experiences. Instead, they expect students to remain stoic. Dr. Gabbay's article, "Negotiating End-of-Life Decision Making: A Comparison of Japanese and U.S. 
Residents' Approaches," is available online at http://www.academicmedicine.org/cgi/content/full/80/7/617 The rest of the issue is available to subscribers only at http://www.academicmedicine.org/content/vol80/issue7/ --By Katherine S. Mangan
-------------------
Here's the contents of this issue of Academic Medicine. Volume 80(7), July 2005. 2005 Association of American Medical Colleges. ISSN: 1040-2446

1. Redesigning Clinical Education: A Major Challenge for Academic Health Centers. Whitcomb, Michael E. MD. pg. 615-616
2. A New Item in the Journal. pg. 616
3. Negotiating End-of-Life Decision Making: A Comparison of Japanese and U.S. Residents' Approaches. Gabbay, Baback B. MD; Matsumura, Shinji MD, MSHS; Etzioni, Shiri MD; Asch, Steven M. MD, MPH; Rosenfeld, Kenneth E. MD; Shiojiri, Toshiaki MD; Balingit, Peter P. MD; Lorenz, Karl A. MD, MSHS. pg. 617-621
4. Residents' End-of-Life Decision Making with Adult Hospitalized Patients: A Review of the Literature. Gorman, Todd E. MD, FRCP(C); Ahern, Stephane P. MD, FRCP(C); Wiseman, Jeffrey MD, FRCP(C), MA; Skrobik, Yoanna MD, FRCP(C). pg. 622-633
5. "This is just too awful; I just can't believe I experienced that": Medical Students' Reactions to Their "Most Memorable" Patient Death. Rhodes-Kropf, Jennifer MD; Carmody, Sharon S. MD; Seltzer, Deborah; Redinbaugh, Ellen PhD; Gadmer, Nina MHA; Block, Susan D. MD; Arnold, Robert M. MD [Featured Topic Research Report]. pg. 634-640
6. Third-Year Medical Students' Experiences with Dying Patients during the Internal Medicine Clerkship: A Qualitative Study of the Informal Curriculum. Ratanawongsa, Neda MD; Teherani, Arianne PhD; Hauer, Karen E. MD. pg. 641-647
7. "It was haunting": Physicians' Descriptions of Emotionally Powerful Patient Deaths. Jackson, Vicki A. MD, MPH; Sullivan, Amy M. EdD; Gadmer, Nina M. MHA; Seltzer, Deborah; Mitchell, Ann M. PhD, RN; Lakoma, Mathew D.; Arnold, Robert M. MD; Block, Susan D. MD. pg. 648-656
8. Teaching and Learning End-of-Life Care: Evaluation of a Faculty Development Program in Palliative Care. Sullivan, Amy M. EdD; Lakoma, Matthew D.; Billings, J Andrew MD; Peters, Antoinette S. PhD; Block, Susan D. MD; the PCEP Core Faculty. pg. 657-668
9. The Palliative Care Clinical Evaluation Exercise (CEX): An Experience-Based Intervention for Teaching End-of-Life Communication Skills. Han, Paul K. J. MD, MA, MPH; Keranen, Lisa B. PhD; Lescisin, Dianne A. MHPE; Arnold, Robert M. MD. pg. 669-676
10. Cover Note: Indiana University School of Medicine. Perry, Pamela Su. pg. 677
11. Blindness. Saramago, Jose. pg. 678
12. Commentary. Miksanek, Tony MD. pg. 679
13. How Can Physicians' Learning Styles Drive Educational Planning? Armstrong, Elizabeth PhD; Parsa-Parsi, Ramin MD, MPH. pg. 680-684
14. Teaching Evidence-Based Medicine: Should We Be Teaching Information Management Instead? Slawson, David C. MD; Shaughnessy, Allen F. PharmD. pg. 685-689
15. Responsibly Managing the Medical School-Teaching Hospital Power Relationship. Chervenak, Frank A. MD; McCullough, Laurence B. PhD. pg. 690-693
16. Self-Reflection in Multicultural Training: Be Careful What You Ask For. Murray-Garcia, Jann L. MD, MPH; Harrell, Steven; Garcia, Jorge A. MD, MS; Gizzi, Elio MD; Simms-Mackey, Pamela MD. pg. 694-701
17. The Irony of Osteopathic Medicine and Primary Care. Cummings, Mark PhD; Dobbs, Kathleen J. PA-C, MS. pg. 702-705
18. Resident Teaching: A Tale of Two Places in Time. Wilson, Lynn D. MD, MPH. pg. 705
19. Considering the Culture of Disability in Cultural Competence Education. Eddey, Gary E. MD; Robey, Kenneth L. PhD. pg. 
706-712 From checker at panix.com Wed Jul 6 00:32:27 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:32:27 -0400 (EDT) Subject: [Paleopsych] SW: Einstein on Physics and Progress Message-ID: History of Science: Einstein on Physics and Progress http://scienceweek.com/2005/sw050708-5.htm The following points are made by Albert Einstein (Physics Today 2005 June): 1) If philosophy is interpreted as a quest for the most general and comprehensive knowledge, it obviously becomes the mother of all scientific inquiry. But it is just as true that the various branches of science have, in their turn, exercised a strong influence on the scientists concerned and, beyond that, have affected the philosophical thinking of each generation. Let us glance, from this point of view, at the development of physics and its influence on the conceptual framework of the other natural sciences during the last hundred years. 2) Since the Renaissance, physics has endeavored to find the general laws governing the behavior of material objects in space and time. To consider the existence of these objects as a problem was left to philosophy. To the scientist, the celestial bodies, the objects on Earth, and their chemical peculiarities, simply existed as real objects in space and time, and his task consisted solely in abstracting these laws from experience by way of hypothetical generalizations. 3) The laws were supposed to hold without exceptions. A law was considered invalidated if, in a single case, any one of its properly deduced conclusions was disproved by experience. In addition, the laws of the external world were also considered to be complete, in the following sense: If the state of the objects is completely given at a certain time, then their state at any other time is completely determined by the laws of nature. This is just what we mean when we speak of "causality." Such was approximately the framework of the physical thinking a hundred years ago. 
4) As a matter of fact, the framework was even more restrictive than it has been sketched. The objects of the external world were considered to consist of immutable mass points, acting upon each other with well-defined forces eternally attached to them and, under the influence of these forces, carrying out incessant motions to which, in the last analysis, all observable processes could be reduced. 5) From a philosophical point of view, the conception of the world, as it appears to those physicists, is closely related to naive realism, since they looked upon the objects in space as directly given by our sense perceptions. The introduction of immutable mass points, however, represented a step in the direction of a more sophisticated realism. For it was obvious from the beginning that the introduction of these atomistic elements was not induced by direct observation. 6) With the Faraday-Maxwell theory of the electromagnetic field, a further refinement of the realistic conception was unavoidable. It became necessary to ascribe the same irreducible reality to the electromagnetic field, continually distributed in space, as formerly to ponderable matter. But sense experiences certainly do not lead inevitably to the field concept. There was even a trend to represent physical reality entirely by the continuous field, without introducing mass points as independent entities into the theory. 7) Summing up, we may characterize the framework of physical thinking up to a quarter of a century ago as follows: There exists a physical reality independent of substantiation and perception. It can be completely comprehended by a theoretical construction which describes phenomena in space and time -- a construction whose justification, however, lies in its empirical confirmation. The laws of nature are mathematical laws connecting the mathematically describable elements of this construction. They imply complete reality in the sense mentioned before. 
8) Under the pressure of overwhelming experimental evidence concerning atomistic phenomena, almost all of today's physicists are now convinced that this conceptual framework -- notwithstanding its apparently wide scope -- cannot be retained. What appears untenable to physicists of our times is not only the requirement of complete causality but also the postulate of a reality which is independent of any measurement or observation. Physics Today http://www.physicstoday.org -------------------------------- Related Material: HISTORY OF PHYSICS: EINSTEIN AND BROWNIAN MOTION The following points are made by Giorgio Parisi (Nature 2005 433:221): 1) On 30 April 1905, Einstein completed his doctoral thesis on osmotic pressure, in which he developed a statistical theory of liquid behavior based on the existence of molecules. This work, together with his subsequent paper on "brownian motion", constitutes one of the most important, but often overlooked, contributions that Einstein made to physics. 2) In the closing decades of the 19th century, theoretical physics was in a state of turmoil. The big outstanding questions of that time have been much discussed. Such questions culminated in relativity and quantum mechanics -- theoretical developments in which Einstein's key role is being justly celebrated this year. But it should not be forgotten that the seemingly innocuous observations of Robert Brown (1773-1858) of the irregular motions of a suspension of pollen grains in water -- now known as brownian motion -- also heralded a revolution in physical thought. 3) Although the concepts of atoms and molecules are now universally accepted, this was not the case at the turn of the 20th century. The statistical interpretation by Ludwig Boltzmann (1844-1906) of the laws of thermodynamics -- a body of work deeply rooted in the ensemble dynamical motion of material atoms -- had many adherents. 
But there were also many heavyweight dissenters (for a time including Max Planck (1858-1947)), who did not accept that thermodynamics had its origins in the reversible motion of invisible hypothetical particles. And many distinguished physicists of the time (among them Wilhelm Roentgen (1845-1923)) suspected that brownian motion indicated a clear failure of Boltzmann's formulation of the second law of thermodynamics. 4) It was in this context that Einstein's explanation for brownian motion made an initial impression. In particular, Einstein showed that the irregular motion of the suspended particles could be understood as arising from the random thermal agitation of the molecules in the surrounding liquid: these smaller entities act both as the driving force for the brownian fluctuations (through the impact of the liquid molecules on the larger particles), and as a means of damping these motions (through the viscosity experienced by the larger particles). This connection between displacement and the viscosity can be quantitatively expressed in one dimension as a relationship between displacement, viscosity, the universal gas constant, Avogadro's number, the Boltzmann constant, the temperature, and the radius of the suspended particles. This finding went beyond simply confirming the existence of atoms and molecules, and provided a new way of determining Avogadro's number. As Einstein himself remarked, the consequence of this relation is that one can see, directly through a microscope, a fraction of the thermal energy manifest as mechanical energy. By proving that a statistical mechanics description could explain quantitatively brownian motion, all doubts concerning Boltzmann's statistical interpretation of the thermodynamic laws suddenly faded.(1-3) References (abridged): 1. Pais, A. Subtle is the Lord... (Oxford Univ. Press, 1982) 2. Kuhn, T. S. Black Body Theory and the Quantum Discontinuity 1894-1911 (Oxford Univ. Press, 1978) 3. Mezard, M., Parisi, G. 
& Virasoro, M. A. Spin Glass Theory and Beyond (World Scientific, Singapore, 1987) Nature http://www.nature.com/nature -------------------------------- Related Material: HISTORY OF PHYSICS: EINSTEIN AND RADIATION The following points are made by Daniel Kleppner (Physics Today 2005 February): 1) Albert Einstein had a genius for extracting revolutionary theory from simple considerations: From the postulate of a universal velocity he created special relativity; from the equivalence principle he created general relativity; from elementary arguments based on statistics he discovered energy quanta. His 1905 paper on quantization of the radiation field (often referred to, inaccurately, as the photoelectric-effect paper) was built on simple statistical arguments, and in subsequent years he returned repeatedly to questions centered on statistics and thermal fluctuations. 2) In 1909, Einstein showed that statistical fluctuations in thermal radiation fields display both particle-like and wave-like behavior. His was the first demonstration of what would later become the principle of complementarity. In 1916, when he turned to the interplay of matter and radiation to create a quantum theory of radiation, he once again based his arguments on statistics and fluctuations. 3) Einstein's theory of radiation is a treasure trove of physics, for in it one can discern the seeds of quantum electrodynamics and quantum optics, the invention of masers and lasers, and later developments such as atom-cooling, Bose-Einstein condensation, and cavity quantum electrodynamics. Our understanding of the Cosmos comes almost entirely from images brought to us by radiation across the electromagnetic spectrum. Einstein's theory of radiation describes the fundamental processes by which those images are created. 4) Einstein's 1905 paper on quantization endowed Max Planck's quantum hypothesis with physical reality. 
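The one-dimensional displacement relation that point 4 of the Brownian-motion summary above describes only in words has the standard form (a reconstruction of Einstein's 1905 result from the quantities the text enumerates, not an equation quoted from the article):

```latex
\langle x^{2} \rangle = 2 D t,
\qquad
D = \frac{R T}{N_{A}} \cdot \frac{1}{6 \pi \eta a} = \frac{k_{B} T}{6 \pi \eta a}
```

where \(\langle x^{2} \rangle\) is the mean squared displacement in time \(t\), \(R\) the universal gas constant, \(T\) the temperature, \(\eta\) the viscosity of the liquid, \(a\) the radius of the suspended particles, \(N_{A}\) Avogadro's number, and \(k_{B} = R/N_{A}\) the Boltzmann constant. Measuring the displacement and the viscosity therefore determines \(N_{A}\), which is the "new way of determining Avogadro's number" the summary mentions.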
The oscillators for which Planck proposed energy quantization were fictitious, and his theory for blackbody radiation lacked obvious physical consequences. But the radiation field for which Einstein proposed energy quantization was real, and his theory had immediate physical consequences. His paper, published in March 1905, was the first of his wonder year. In rapid succession he published papers on Brownian motion, special relativity, and his quantum theory of the specific heat of solids. 5) In 1907, his interest shifted to gravity, and he took the first tentative steps toward the theory of general relativity. His struggle with gravitational theory became all-consuming until November 1915, when he finally obtained satisfactory gravitational field equations. During those years of struggle, however, Einstein apparently had a simmering discontent with his understanding of thermal radiation, for in July 1916, he turned to the problem of how matter and radiation can achieve thermal equilibrium. One could argue that 1916 was too soon to deal with that problem because there were serious conceptual obstacles to the creation of a consistent theory. Einstein, in his Olympian fashion, simply ignored them. In the next eight months, he wrote three papers on the subject, publishing the third, and best known, in 1917.[1,2] References (abridged): 1. A. Einstein, Phys. Z. 18, 121 (1917); English translation On the Quantum Theory of Radiation, by D. ter Haar, The Old Quantum Theory, Pergamon Press, New York (1967), p. 167 2. A. Pais, Rev. Mod. Phys. 
49, 925 (1977) Physics Today http://www.physicstoday.org From checker at panix.com Wed Jul 6 00:32:36 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:32:36 -0400 (EDT) Subject: [Paleopsych] SW: On the Aether and Broken Symmetry Message-ID: Theoretical Physics: On the Aether and Broken Symmetry http://scienceweek.com/2005/sw050708-6.htm The following points are made by Frank Wilczek (Nature 2005 435:152): 1) The concept that what we ordinarily perceive as empty space is in fact a complicated medium is a profound and pervasive theme in modern physics. This invisible inescapable medium alters the behavior of the matter that we do see. Just as Earth's gravitational field allows us to select a unique direction as up, and thereby locally reduces the symmetry of the underlying equations of physics, so cosmic fields in "empty" space lower the symmetry of these fundamental equations everywhere. Or so theory has it. For although this concept of a symmetry-breaking aether has been extremely fruitful (and has been demonstrated indirectly in many ways), the ultimate demonstration of its validity --cleaning out the medium and restoring the pristine symmetry of the equations -- has never been achieved: that is, perhaps, until now. 2) In new work, Cramer et al.[1] claim to have found evidence that -- for very brief moments, and over a very small volume --experimentalists working at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in New York have vaporized one symmetry-breaking aether, and produced a more perfect emptiness. This pioneering attempt to decode the richly detailed (in other words, complicated and messy) data emerging from the RHIC experiments is intricate[2], and it remains to be seen whether the interpretation Cramer et al. propose evolves into a consensus. In any case, they've put a challenge on the agenda, and suggested some concrete ways to tackle it. 
3) But what exactly is this underlying symmetry of nature that is broken by the aether? How is it broken, and how might it be restored? The symmetry in question is called chiral symmetry, and it involves the behavior of quarks, the principal constituents of the protons and neutrons in atomic nuclei (among other things). Chiral symmetry is easiest to describe if we adopt the slight idealization that the lightest quarks, the up quark (u) and down quark (d), are massless. (In reality their masses are small, on the scale of the energies in play, but not quite zero.) According to the equations of quantum chromodynamics (QCD), the theory that describes quarks and their interactions via the strong nuclear force, the possible transformations among quarks are very restricted. One rule is that u-quarks and d-quarks retain their "flavor" -- that is, a (u) never converts into a (d), nor a (d) into a (u). 4) Quarks also, like the more familiar photons, have an intrinsic spin. If the spin axis is aligned with the direction of motion, then the sense of the rotation defines a handedness, known as chirality, rather like a left- or right-handed screw. The two possible states of chirality of a quark, left and right, are essentially the same concept as left and right circular polarization for photons. The fundamental interaction between quarks and gluons, to which we ultimately trace the strong nuclear force, conserves chirality as well as flavor. Thus a u-quark with left-handed chirality (written uL) never converts into a right-handed uR, and so on. But these extra conservation laws, which follow from the symmetry of QCD's equations, are too good to be true. In reality, one finds that although the rule forbidding changes of flavor holds true, there is no additional conservation law for chirality -- chiral symmetry is broken. 5) The accepted explanation for this mismatch blames a form of aether. 
The idea is that there is such a powerful attractive interaction between uL-quarks and right-handed anti-u quarks (anti-uR; every quark has an antiquark with the opposite charge), and likewise between dL-quarks and anti-dR antiquarks, that the energy gained from their attraction outweighs the cost of creating the particles in the first place. Thus, perfectly empty space, devoid of quarks, is unstable. One can lower the energy of the vacuum by filling it with bound uL-(anti-uR) and dL-(anti-dR) pairs (and their antiparticles, (anti-uL)-uR and (anti-dL)-dR). Physicists call this process the formation of the chiral condensate. In the stable state that finally results, the conservation of chirality is rendered ineffective, as space itself has become a reservoir containing, for example, an indefinite number of uL-quarks.[3-5] References (abridged): 1. Cramer, J., Miller, G., Wu, J. & Yoon, J. -H. preprint at http://www.arxiv.org/nucl-th/0411031 (2004) 2. Kolb, P. F. & Heinz, U. in Quark Gluon Plasma Vol. 3 (eds Hwa, R. C. & Wang, X.-N.) (World Scientific, Singapore, 2004); preprint at http://www.arxiv.org/nucl-th/0305084 (2003) 3. Adcox, K. et al. (The PHENIX collaboration) Nucl. Phys. A (submitted); preprint at http://arxiv.org/nucl-ex/0410003 (2005) 4. Adams, J. et al. (The STAR collaboration) Nucl. Phys. A (submitted); preprint at http://arxiv.org/nucl-ex/05010095 (2005) 5. Back, B. B. et al. (The PHOBOS collaboration) Nucl. Phys. A (in the press); doi:10.1016/j.nuclphysa.2005.03.084 (2005) Nature http://www.nature.com/nature -------------------------------- Related Material: ON THE ETHER CONCEPT IN PHYSICS Notes by ScienceWeek: In the late 19th century, what we now call "classical" physics incorporated the assumed existence of the "ether", a hypothetical medium believed to be necessary to support the propagation of electromagnetic radiation. 
The famous *Michelson-Morley experiment of 1887 was interpreted as demonstrating the nonexistence of the ether, and this experiment became a significant prelude to the subsequent formulation of Einstein's *special theory of relativity. Although it is often stated outside the physics community that the ether concept was abandoned after the Michelson-Morley experiment, this is not quite true, since the classical ether concept has been essentially reformulated into several modern *field concepts. The following points are made by Frank Wilczek (Physics Today January 1999): 1) Isaac Newton (1642-1727) believed in a continuous medium filling all space, but his equations did not require any such medium, and by the early 19th century the generally accepted ideal for fundamental physical theory was to discover mathematical equations for forces between indestructible atoms moving through empty space. 2) It was Michael Faraday (1791-1867) who revived the idea that space was filled with a medium having physical effects in itself... To summarize Faraday's results, James Clerk Maxwell (1831-1879) adapted and developed the mathematics used to describe fluids and elastic solids, and Maxwell postulated an elaborate mechanical model of electrical and magnetic fields. 3) The achievement of Einstein (1879-1955) in his paper on special relativity was to highlight and interpret the hidden symmetry of Maxwell's equations, not to change them. The Faraday-Maxwell concept of electric and magnetic fields, as media or ethers filling all space, was retained by Einstein. Later, Einstein was dissatisfied with the particle-field dualism inherent in the early atomic theory, and Einstein sought, without success, a unified field theory in which all fundamental particles would emerge as special solutions to the field equations. 
4) Following Einstein, Paul Dirac (1902-1984) then showed that photons emerged as a logical consequence of applying the rules of quantum mechanics to Maxwell's electromagnetic ether. This connection was soon generalized so that particles of any sort could be represented as the small-amplitude excitations of quantum fields. Electrons, for example, can be regarded as excitations of an electron field, an ether that pervades all space and time uniformly. Our current and extremely successful theories of the *strong, electromagnetic, and weak forces are formulated as *relativistic quantum field theories with *local interactions. 5) The author states: "Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics." Physics Today http://www.physicstoday.org -------------------------------- Notes by ScienceWeek: Michelson-Morley experiment of 1887: Conducted by Albert Michelson (1852-1931) and Edward Morley (1838-1923), the experiment attempted to measure the velocity of the Earth through the "ether" by using an interferometer to detect a difference in the speed of light in the direction of Earth's rotation from the speed perpendicular to this direction. No difference was observed, indicating the absence of an ether "wind". special theory of relativity: Proposed by Einstein in 1905, the special theory refers to inertial (non-accelerated) frames of reference. It assumes physical laws are identical in all frames of reference and that the speed of light in a vacuum is constant throughout the Universe and is independent of the speed of the observer. In general, the special theory gives a unified account of the laws of mechanics and electromagnetism (including optics). 
The companion theory, the general theory of relativity (1915), deals with general relative motion between accelerated frames of reference, and it is the general theory that led to Einstein's analysis of gravitation. field: In this context, in general, the term "field" refers to a physical quantity (e.g., electric or magnetic field) that varies from point to point in space. strong, electromagnetic, and weak forces: The fundamental forces currently identified in physics are the gravitational force, the electromagnetic force, the nuclear strong force, and the nuclear weak force. The nuclear strong force is the dominant force that acts between hadrons (e.g., the force that binds neutrons and protons in nuclei). (A "hadron" is any object made of *quarks and/or antiquarks). The weak force occurs between leptons (particles without internal structure, e.g., electrons, neutrinos) and hadrons (particles with internal structure, e.g., neutrons and protons). In general, the weak force is responsible for radioactivity. 
Quantum electrodynamics, for example, is a particular quantum field theory describing the emission or absorption of photons by charged particles. "Relativistic quantum field theories" are used to describe fundamental interactions between elementary particles (which exhibit relativistic velocities, i.e., velocities approaching the speed of light). local interactions: In this context, a local interaction is an interaction between particles whose quantum mechanical wave functions are confined to a small region of a large system rather than being extended throughout the system. -------------------------------- Related Material: ON FIELD THEORY IN PHYSICS Notes by ScienceWeek: In physics, a field is an entity that acts as intermediary in interactions between particles, and which is distributed over part or all of space, and whose properties are functions of space coordinates, and except for static fields, also functions of time. There is also a quantum-mechanical analog of this entity, in which the function of space and time is replaced by an operator at each point in space-time. The following points are made by Roman Jackiw (Proc. Natl. Acad. Sci. 1998 95:12776): 1) Present-day theory for fundamental processes (i.e., descriptions of elementary particles and forces) is phenomenally successful. Experimental data confirms theoretical prediction, and where accurate calculation and experiments are attainable, agreement is achieved to 6 or 7 figures. Two examples: a) The helium atom ground state energy (*Rydbergs) is experimentally measured as -5.8071394 and theoretically calculated as -5.8071380. b) The muon magnetic dipole moment is experimentally measured as 2.00233184600 and theoretically calculated as 2.00233183478. 
2) The theoretical structure within which this success has been achieved is *local field theory, which offers a wide variety of applications, and which provides a model for fundamental physical reality as described by our theories of *strong, electroweak, and gravitational processes. No other framework exists in which one can calculate so many phenomena with such ease and accuracy. 3) But in spite of these successes, today there is little confidence that field theory will advance our understanding of nature at its fundamental workings beyond what has already been achieved. Although in principle all observed phenomena can be explained by present-day field theory, these accounts are still imperfect, requiring ad hoc inputs. Moreover, because of conceptual and technical obstacles, classical gravity theory has not been integrated into the *quantum field description of nongravitational forces: *quantizing the *metric tensor of Einstein's theory produces a quantum field theory beset by infinities that apparently cannot be controlled. 4) These shortcomings are actually symptoms of a deeper lack of understanding concerning *symmetry and symmetry breaking... Physicists are happy in the belief that Nature in its fundamental workings is essentially simple, but observed physical phenomena rarely exhibit overwhelming regularity. Therefore, at the very same time that we construct a physical theory with intrinsic symmetry, we must find a way to break the symmetry in physical consequences of the model. 5) These problems have produced a theoretical impasse for over two decades, and in the absence of new experiments to channel theoretical speculation, some physicists have concluded that it will not be possible to make progress on these questions within field theory, and they have turned to a new structure, "*string theory". In field theory, the quantized excitations are point particles with point interactions, and this gives rise to the infinities. 
In string theory, the excitations are extended objects -- strings -- with nonlocal interactions; there are no infinities in string theory, and that enormous defect of field theory is absent. 6) Yet in spite of its positive features, until now string theory has provided a framework rather than a definite structure, and a precise derivation of the *Standard Model has yet to be given. The author concludes: "On previous occasions when it appeared that quantum field theory was incapable of advancing our understanding of fundamental physics, new ideas and new approaches to the subject dispelled the pessimism. Today we do not know whether the impasse within field theory is due to a failure of imagination or whether indeed we have to present fundamental physical laws in a new framework, thereby replacing the field theoretic one, which has served us well for over 100 years." Proc. Nat. Acad. Sci. http://www.pnas.org -------------------------------- Notes by ScienceWeek: Rydbergs: A unit of energy used in atomic physics, value = 13.605698 electronvolts. local field theory: In this context, "locality" is the condition that two events at spatially separated locations are entirely independent of each other, provided that the time interval between the events is less than that required for a light signal to travel from one location to the other. For example, the quantum mechanical wave function is a "local" field. strong, electroweak, and gravitational processes: The fundamental forces comprise the gravitational force, the electromagnetic force, the nuclear strong force, and the nuclear weak force. The "electroweak" interactions are a unification of the electromagnetic and nuclear weak interactions, and are described by the Weinberg-Salam theory (sometimes called "quantum flavordynamics"; also called the Glashow-Weinberg-Salam theory). 
quantum field description: In general, a quantum field theory is a quantum mechanical theory applied to systems having an infinite number of *degrees of freedom. The term is also used to refer to any quantum mechanical theory in which particles are represented by fields whose normal modes of oscillation are quantized (see below). degrees of freedom: In general, the number of independent parameters required to specify the configuration of a system. quantizing: In experimental physics, a quantized variable is a variable taking only discrete multiple values of a quantum mechanical constant. In theoretical physics, "quantizing" means the consistent application of certain rules that lead from classical to quantum mechanics. In general, "quantization" is a transition from a classical theory or a classical quantity to a quantum theory or the corresponding quantity in quantum mechanics. metric tensor: The mathematical statement (involving a set of quantities) that describes the deviation of the Pythagoras theorem in a curved space. symmetry and symmetry breaking: If a theory or process does not change when certain operations are performed on it, the theory or process is said to possess a symmetry with respect to those operations. For example, a circle remains unchanged under rotation or reflection, and a circle therefore has rotational and reflection symmetry. The term "symmetry breaking" refers to the deviation from exact symmetry exhibited by many physical systems, and in general, symmetry breaking encompasses both "explicit" symmetry breaking and "spontaneous" symmetry breaking. Explicit symmetry breaking is a phenomenon in which a system is not quite, but almost, the same for two configurations related by exact symmetry. Spontaneous symmetry breaking refers to a situation in which the solution of a set of physical equations fails to exhibit a symmetry possessed by the equations themselves. 
string theory: In particle physics, string theory is a theory of elementary particles based on the idea that the fundamental entities are not point-like particles but finite lines (strings), or closed loops formed by strings, the strings being one-dimensional curves with zero thickness and lengths (or loop diameters) of the order of the Planck length of 10^(-35) meters. Standard Model: In particle physics, the Standard Model is a theoretical framework whose basic idea is that all the visible matter in the universe can be described in terms of the elementary particles leptons and quarks and the forces acting between them. Leptons are a class of point-like fundamental particles showing no internal structure and no involvement with the strong forces. A quark is a hypothetical fundamental particle, having charges whose magnitudes are one-third or two-thirds of the electron charge, and from which the elementary particles may in theory be constructed. From anonymous_animus at yahoo.com Wed Jul 6 19:52:40 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Wed, 6 Jul 2005 12:52:40 -0700 (PDT) Subject: [Paleopsych] stereotypes In-Reply-To: <200507061800.j66I0CR24744@tick.javien.com> Message-ID: <20050706195240.68173.qmail@web30808.mail.mud.yahoo.com> >>Lieberman suggests that people are likely to pick up on stereotypes, regardless of whether their family or community agrees with them.<< --I think that's true. Perhaps it's rational to avoid people who are "marked" by a large group, regardless of whether you're a member of that group or not. Germans without antisemitism in their families would still have known to avoid shopping at Jewish stores or being seen associating with Jews, picking up on their "marked" status and fearing the consequences of associating with a scapegoated class. Michael ____________________________________________________ Sell on Yahoo! Auctions - no fees. Bid on great items. 
http://auctions.yahoo.com/ From andrewsa at newpaltz.edu Wed Jul 6 20:42:34 2005 From: andrewsa at newpaltz.edu (Alice Andrews) Date: Wed, 6 Jul 2005 16:42:34 -0400 Subject: [Paleopsych] David L. Smith: Natural-Born Liars Message-ID: <034501c5826b$3d421660$6501a8c0@callastudios> Dear paleos, A member of this list--David Livingstone Smith--has a recent and very good article on self-deception in Scientific American... http://www.sciam.com/article.cfm?SID=mail&articleID=0007B7A0-49D6-128A-89D683414B7F0000%20 And here's one of my favorite Trivers quotes on deception: "One of the most important things to realize about systems of animal communication is that they are not systems for the dissemination of the truth." (From Social Evolution.) All best, Alice ps I'm of the mind that there exists a continuum (like many traits)....That is, I believe that some (people) are more naturally self-deceptive than others.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Wed Jul 6 21:18:31 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:18:31 -0400 (EDT) Subject: [Paleopsych] NYT: Straight, Gay or Lying? Bisexuality Revisited Message-ID: Straight, Gay or Lying? Bisexuality Revisited http://www.nytimes.com/2005/07/05/health/05sex.html By BENEDICT CAREY Some people are attracted to women; some are attracted to men. And some, if Sigmund Freud, Dr. Alfred Kinsey and millions of self-described bisexuals are to be believed, are drawn to both sexes. But a new study casts doubt on whether true bisexuality exists, at least in men. The study, by a team of psychologists in Chicago and Toronto, lends support to those who have long been skeptical that bisexuality is a distinct and stable sexual orientation. People who claim bisexuality, according to these critics, are usually homosexual, but are ambivalent about their homosexuality or simply closeted. 
"You're either gay, straight or lying," as some gay men have put it. In the new study, a team of psychologists directly measured genital arousal patterns in response to images of men and women. The psychologists found that men who identified themselves as bisexual were in fact exclusively aroused by either one sex or the other, usually by other men. The study is the largest of several small reports suggesting that the estimated 1.7 percent of men who identify themselves as bisexual show physical attraction patterns that differ substantially from their professed desires. "Research on sexual orientation has been based almost entirely on self-reports, and this is one of the few good studies using physiological measures," said Dr. Lisa Diamond, an associate professor of psychology and gender identity at the University of Utah, who was not involved in the study. The discrepancy between what is happening in people's minds and what is going on in their bodies, she said, presents a puzzle "that the field now has to crack, and it raises this question about what we mean when we talk about desire." "We have assumed that everyone means the same thing," she added, "but here we have evidence that that is not the case." Several other researchers who have seen the study, scheduled to be published in the journal Psychological Science, said it would need to be repeated with larger numbers of bisexual men before clear conclusions could be drawn. Bisexual desires are sometimes transient and they are still poorly understood. Men and women also appear to differ in the frequency of bisexual attractions. "The last thing you want," said Dr. Randall Sell, an assistant professor of clinical socio-medical sciences at Columbia University, "is for some therapists to see this study and start telling bisexual people that they're wrong, that they're really on their way to homosexuality." He added, "We don't know nearly enough about sexual orientation and identity" to jump to these conclusions. 
In the experiment, psychologists at Northwestern University and the Center for Addiction and Mental Health in Toronto used advertisements in gay and alternative newspapers to recruit 101 young adult men. Thirty-three of the men identified themselves as bisexual, 30 as straight and 38 as homosexual. The researchers asked the men about their sexual desires and rated them on a scale from 0 to 6 on sexual orientation, with 0 to 1 indicating heterosexuality, and 5 to 6 indicating homosexuality. Bisexuality was measured by scores in the middle range. Seated alone in a laboratory room, the men then watched a series of erotic movies, some involving only women, others involving only men. Using a sensor to monitor sexual arousal, the researchers found what they expected: gay men showed arousal to images of men and little arousal to images of women, and heterosexual men showed arousal to women but not to men. But the men in the study who described themselves as bisexual did not have patterns of arousal that were consistent with their stated attraction to men and to women. Instead, about three-quarters of the group had arousal patterns identical to those of gay men; the rest were indistinguishable from heterosexuals. "Regardless of whether the men were gay, straight or bisexual, they showed about four times more arousal" to one sex or the other, said Gerulf Rieger, a graduate psychology student at Northwestern and the study's lead author. Although about a third of the men in each group showed no significant arousal watching the movies, their lack of response did not change the overall findings, Mr. Rieger said. Since at least the middle of the 19th century, behavioral scientists have noted bisexual attraction in men and women and debated its place in the development of sexual identity. Some experts, like Freud, concluded that humans are naturally bisexual. In his landmark sex surveys of the 1940's, Dr. 
Alfred Kinsey found many married, publicly heterosexual men who reported having had sex with other men. "Males do not represent two discrete populations, heterosexual and homosexual," Dr. Kinsey wrote. "The world is not to be divided into sheep and goats." By the 1990's, Newsweek had featured bisexuality on its cover, bisexuals had formed advocacy groups and television series like "Sex and the City" had begun exploring bisexual themes. Yet researchers were unable to produce direct evidence of bisexual arousal patterns in men, said Dr. J. Michael Bailey, a professor of psychology at Northwestern and the new study's senior author. A 1979 study of 30 men found that those who identified themselves as bisexuals were indistinguishable from homosexuals on measures of arousal. Studies of gay and bisexual men in the 1990's showed that the two groups reported similar numbers of male sexual partners and risky sexual encounters. And a 1994 survey by The Advocate, the gay-oriented newsmagazine, found that, before identifying themselves as gay, 40 percent of gay men had described themselves as bisexual. "I'm not denying that bisexual behavior exists," said Dr. Bailey, "but I am saying that in men there's no hint that true bisexual arousal exists, and that for men arousal is orientation." But other researchers - and some self-identified bisexuals - say that the technique used in the study to measure genital arousal is too crude to capture the richness - erotic sensations, affection, admiration - that constitutes sexual attraction. Social and emotional attraction are very important elements in bisexual attraction, said Dr. Fritz Klein, a sex researcher and the author of "The Bisexual Option." "To claim on the basis of this study that there's no such thing as male bisexuality is overstepping, it seems to me," said Dr. Gilbert Herdt, director of the National Sexuality Resource Center in San Francisco. 
"It may be that there is a lot less true male bisexuality than we think, but if that's true then why in the world are there so many movies, novels and TV shows that have this as a theme - is it collective fantasy, merely a projection? I don't think so." John Campbell, 36, a Web designer in Orange County, Calif., who describes himself as bisexual, also said he was skeptical of the findings. Mr. Campbell said he had been strongly attracted to both sexes since he was sexually aware, although all his long-term relationships had been with women. "In my case I have been accused of being heterosexual, but I also feel a need for sex with men," he said. Mr. Campbell rated his erotic attraction to men and women as about 50-50, but his emotional attraction, he said, was 90 to 10 in favor of women. "With men I can get aroused, I just don't feel the fireworks like I do with women," he said. About 1.5 percent of American women identify themselves as bisexual. And bisexuality appears easier to demonstrate in the female sex. A study published last November by the same team of Canadian and American researchers, for example, found that most women who said they were bisexual showed arousal to men and to women. Although only a small number of women identify themselves as bisexual, Dr. Bailey said, bisexual arousal may in fact be the norm for them. Researchers have little sense yet of how these differences may affect behavior, or sexual identity. In the mid-1990's, Dr. Diamond recruited a group of 90 women at gay pride parades, academic conferences on gender issues and other venues. About half of the women called themselves lesbians, a third identified as bisexual and the rest claimed no sexual orientation. In follow-up interviews over the last 10 years, Dr. Diamond has found that most of these women have had relationships both with men and women.
"Most of them seem to lean one way or the other, but that doesn't preclude them from having a relationship with the nonpreferred sex," she said. "You may be mostly interested in women but, hey, the guy who delivers the pizza is really hot, and what are you going to do?" "There's a whole lot of movement and flexibility," Dr. Diamond added. "The fact is, we have very little research in this area, and a lot to learn." From checker at panix.com Wed Jul 6 21:18:39 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:18:39 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: The Heterosexual Revolution Message-ID: The Heterosexual Revolution New York Times, 5.7.5 http://www.nytimes.com/2005/07/05/opinion/05coontz.html By STEPHANIE COONTZ Olympia, Wash. THE last week has been tough for opponents of same-sex marriage. First Canadian and then Spanish legislators voted to legalize the practice, prompting American social conservatives to renew their call for a constitutional amendment banning such marriages here. James Dobson of the evangelical group Focus on the Family has warned that without that ban, marriage as we have known it for 5,000 years will be overturned. My research on marriage and family life seldom leads me to agree with Dr. Dobson, much less to accuse him of understatement. But in this case, Dr. Dobson's warnings come 30 years too late. Traditional marriage, with its 5,000-year history, has already been upended. Gays and lesbians, however, didn't spearhead that revolution: heterosexuals did. Heterosexuals were the upstarts who turned marriage into a voluntary love relationship rather than a mandatory economic and political institution. Heterosexuals were the ones who made procreation voluntary, so that some couples could choose childlessness, and who adopted assisted reproduction so that even couples who could not conceive could become parents. 
And heterosexuals subverted the long-standing rule that every marriage had to have a husband who played one role in the family and a wife who played a completely different one. Gays and lesbians simply looked at the revolution heterosexuals had wrought and noticed that with its new norms, marriage could work for them, too. The first step down the road to gay and lesbian marriage took place 200 years ago, when Enlightenment thinkers raised the radical idea that parents and the state should not dictate who married whom, and when the American Revolution encouraged people to engage in "the pursuit of happiness," including marrying for love. Almost immediately, some thinkers, including Jeremy Bentham and the Marquis de Condorcet, began to argue that same-sex love should not be a crime. Same-sex marriage, however, remained unimaginable because marriage had two traditional functions that were inapplicable to gays and lesbians. First, marriage allowed families to increase their household labor force by having children. Throughout much of history, upper-class men divorced their wives if their marriage did not produce children, while peasants often wouldn't marry until a premarital pregnancy confirmed the woman's fertility. But the advent of birth control in the 19th century permitted married couples to decide not to have children, while assisted reproduction in the 20th century allowed infertile couples to have them. This eroded the traditional argument that marriage must be between a man and a woman who were able to procreate. In addition, traditional marriage imposed a strict division of labor by gender and mandated unequal power relations between men and women. "Husband and wife are one," said the law in both England and America, from early medieval days until the late 19th century, "and that one is the husband." This law of "coverture" was supposed to reflect the command of God and the essential nature of humans. 
It stipulated that a wife could not enter into legal contracts or own property on her own. In 1863, a New York court warned that giving wives independent property rights would "sow the seeds of perpetual discord," potentially dooming marriage. Even after coverture had lost its legal force, courts, legislators and the public still cleaved to the belief that marriage required husbands and wives to play totally different domestic roles. In 1958, the New York Court of Appeals rejected a challenge to the traditional legal view that wives (unlike husbands) couldn't sue for loss of the personal services, including housekeeping and the sexual attentions, of their spouses. The judges reasoned that only wives were expected to provide such personal services anyway. As late as the 1970's, many American states retained "head and master" laws, giving the husband final say over where the family lived and other household decisions. According to the legal definition of marriage, the man was required to support the family, while the woman was obligated to keep house, nurture children, and provide sex. Not until the 1980's did most states criminalize marital rape. Prevailing opinion held that when a bride said, "I do," she was legally committed to say, "I will" for the rest of her married life. I am old enough to remember the howls of protest with which some defenders of traditional marriage greeted the gradual dismantling of these traditions. At the time, I thought that the far-right opponents of marital equality were wrong to predict that this would lead to the unraveling of marriage. As it turned out, they had a point. Giving married women an independent legal existence did not destroy heterosexual marriage. And allowing husbands and wives to construct their marriages around reciprocal duties and negotiated roles - where a wife can choose to be the main breadwinner and a husband can stay home with the children - was an immense boon to many couples.
But these changes in the definition and practice of marriage opened the door for gay and lesbian couples to argue that they were now equally qualified to participate in it. Marriage has been in a constant state of evolution since the dawn of the Stone Age. In the process it has become more flexible, but also more optional. Many people may not like the direction these changes have taken in recent years. But it is simply magical thinking to believe that by banning gay and lesbian marriage, we will turn back the clock. Stephanie Coontz, the director of public education for the Council on Contemporary Families, is the author of "Marriage, a History: From Obedience to Intimacy, or How Love Conquered Marriage." From checker at panix.com Wed Jul 6 21:18:51 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:18:51 -0400 (EDT) Subject: [Paleopsych] NYT: A Modern Refrain: My Genes Made Me Do It Message-ID: A Modern Refrain: My Genes Made Me Do It New York Times, 5.7.5 http://www.nytimes.com/2005/07/05/health/05comm.html By KENT SEPKOWITZ, M.D. Our theories about human disease are more the product of current fashion than we would like to admit. But just as the moment influences the hemline and the automobile fender, so too does a type of intellectual currency affect our understanding of how illness happens. Much of the 20th century was spent in pursuit of external causes of disease - cigarettes, E. coli, fatty foods, tick bites. Rather like the hero in an old western, medicine's job was to track down the bad guys, round 'em up and squish 'em before a real commotion got a-goin'. Antibiotics, vaccines, heart pills - these were our weapons in the epic battle between us and them, good versus evil. More recently, though, we have cast our gaze inward, mesmerized by our own adorable DNA. Just last decade, after 40 years of intense flirtation, this relationship was consummated as we cloned the entire human genome. 
Promises of improved health and longevity soon followed, as we had apparently found our way to the bedrock truths that underlie all illness. But with this orgy of molecular self-admiration has come a fundamental shift in thinking about human disease. We have moved from our long-held premise that the outside world (too much ice cream and flesh-eating bacteria) threatens us to a belief that the trouble arises from something much closer to home - our own double-crossing genes. Although packaged with the glint of modernity, this theory actually draws from something old and wintry - the harsh remedies proposed by John Calvin, predestination's No. 1 guy. According to Calvin, our fate is determined at first creation. Similar to this, the articles of gene-ism would have us believe that our medical fate is sealed by the genes we receive at conception. Seem a bit grim? Maybe not. Our unquestioning acceptance of the gene as prime mover has certain distinct - and ultramodern - advantages. Consider: you are no longer responsible for anything. Sound familiar? Once it was the devil. Now it is the gene that made you do it. You are officially off the hook. It isn't your fault at all. It's your faulty genes. It gets even better. Not only is it not your fault, but you actually are a victim, a victim of your own toxic gene pool. In the Age of Genetics, you no longer have to try to cut out smoking or think twice about gobbling that candy bar in your desk drawer. And forget jogging on a cold morning. The die was cast long ago, from the moment the parental sperm and egg first integrated their spiraling nucleotides. The resulting package of chromosomes has programmed every step of your life. So sit back, relax and leave the driving to someone else. But one problem remains: this new world order is at sharp odds with an older theism, that blame can and must be assigned in every human transaction. 
We have built a vast judicial-industrial complex that offers lawsuits for every need, satisfying varied urges like the wish for fairness or revenge, for getting rich quick or simply getting your due. This all-blame all-the-time approach applies to much more than determining culpability should a neighbor trip on your lawn and break an arm. It also says that people are responsible for their own health - and illness. It is your fault if you develop cancer or a heart attack because you didn't eat, think or breathe right. You have allowed the corrosive effect of unresolved anger or stress or poor self-esteem to undermine your health. So if you are sick or miserable or both, it's your own darned fault. No wonder we fled. The transition from the chaotic, barking family feud character of lawsuits to the sleek silence of a future devoted to cloning and splicing genes surely derives from something larger than scientific opportunity or our fascination with "Star Trek." How modern to deflect blame suavely onto a poorly understood high-end concept, the manic twitches of deoxyribonucleic acid. Gosh, biology is so much bigger than we are. Nothing we can do about it, really. Our wholehearted endorsement of the science of no personal responsibility may sour as new insights and new intellectual fashion result in new bedrock truths. A future generation may castigate us for our unblinking narcissism. What were we thinking? How could genes be responsible for red hair and bad memory and atherosclerosis? But if they come after us wagging their stubby fingers, we have an airtight explanation. We'll tell them it was not really our idea, the whole gene thing. No, we will say, we were victims. Victims of fashion.
From checker at panix.com Wed Jul 6 21:19:28 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:19:28 -0400 (EDT) Subject: [Paleopsych] The Chronicle: Wired Campus Blog Message-ID: The Chronicle: Wired Campus Blog http://wiredcampus.chronicle.com/ [URL not fixed, as this comes every week.] 5.7.1 Education-technology news from around the Web, brought to you by The Chronicle of Higher Education The Economics of Cheating Administrators at the University of Virginia are investigating claims that almost 35 graduate students [46]used an online answer key to cheat on homework assignments. A first-year student in the economics program reportedly found answers to problems in an introductory course on the Web and proceeded to share the answers with most of his or her peers in the class. Almost everyone in the course could be implicated in the cheating scandal. "I think about all the students were involved in some questionable behavior," said Steven A. Stern, the course's professor. (Richmond Times-Dispatch) Measuring 'Internet Intelligence' College students might be old pros when it comes to downloading music or swapping instant messages, but that doesn't necessarily make them [50]wise to the ways of the Internet. So a team of colleges, along with the Educational Testing Service, is developing a test that gauges students' "Internet intelligence." The Information and Communication Technology Literacy Assessment, as the exam is called, could become popular with professors who bemoan their students' poor Web-research skills. The test measures students' ability to find information online, verify it, and credit it properly. (Associated Press) Justice Department Raids Piracy Dens Four people suspected of working in digital-piracy rings were arrested Wednesday in [54]an ambitious sting operation that spanned 11 countries. The sting, dubbed "Operation Site Down" by the U.S. 
Department of Justice, included raids of 20 "warez" groups -- underground communities that post pirated software and movies online. Officials did not say whether any of the four people arrested were college students, but an earlier bust conducted by the Justice Department did lead to [55]the arrest of a student at the University of Maryland at College Park. (Los Angeles Times) Beep Beep Forget those tedious piano lessons that your mother made you suffer through as a child. Researchers at the University of Southern California have developed a computerized system that allows the user to [59]play music using a steering wheel and gas and brake pedals. Essentially, the researchers say, the user "drives" his or her way through the music with the system, known as the Expression Synthesis Project. The device permits people to experience playing music without having to first master an instrument, the researchers say. The system is programmed to guide the user through Brahms's Hungarian Dance No. 5 in G minor. (The Chronicle, subscription required) 5.6.30 Supporting Piracy or Making a Point? Will BitTorrent, the network popular with movie swappers, be the next peer-to-peer service to face legal problems? Or is post-Grokster hysteria beginning to set in? Legal experts are debating those questions after discovering [63]a manifesto advocating digital piracy on the Web site of Bram Cohen, the software's creator. Mr. Cohen says he wrote the short polemic in 1999, two years before he started designing BitTorrent. And since releasing the software, he has repeatedly argued that it is meant for legal file swapping, not piracy. But some lawyers say that the memo could [64]damage Mr. Cohen's credibility when he claims that BitTorrent, unlike Grokster and Morpheus, does not endorse copyright infringement. (Wired News) For more on the implications of the Supreme Court's decision in MGM v. Grokster, see [65]an article from The Chronicle by Andrea L. Foster. 
Ready for the Digital Revolution By the year 2020 almost every piece of research published in the United Kingdom will be available online. And only [69]one in 10 newly-published articles will appear in print, according to a study commissioned by the British Library. That's a "seismic shift" for the publishing industry, says Lynne Brindley, the library's chief executive. The library will prepare, she says, by spending the next three years bolstering its technology for storing and organizing digital material. (BBC News) Peer-to-Peer, Legally The Supreme Court's Grokster decision has put most peer-to-peer networks on shaky legal ground, but it could be just what the doctor ordered for a file-swapping service called Mashboxx. The network is attempting to establish itself as [73]a legal peer-to-peer option by persuading record companies to let people download songs and play them a few times before buying the tunes. (PC Pro) Mashboxx's founder, Wayne Rosso, is [74]a familiar face to observers of the file-sharing wars: He was once president of Grokster. But Mr. Rosso has transformed himself from a thorn in the music industry's side into something of an ally -- much like Shawn Fanning, the founder of Napster, whose SnoCap software helps record companies track their songs on Mashboxx. (The Washington Post) Physicists Are People Too Quantum Diaries, a Web site featuring [78]blogs by researchers, highlights the minutiae of those who study the minutiae. (The Chronicle, subscription required) 5.6.29 RIAA Fires Off More Antipiracy Lawsuits Lawyers for the Recording Industry Association of America might be flush with victory after the Supreme Court's Grokster decision on Monday, but they still have plenty of work to do: Today the RIAA announced a new batch of lawsuits against people suspected of online piracy of copyrighted songs. A total of 784 people were identified in this month's suits, including a number of Grokster users. 
But recording studios did not say whether any of the defendants were suspected of sharing songs on campus networks. Song-Swapping by Subscription How do you make a music-downloading subscription service more appealing to college students? Try adding a dash of iTunes to the enterprise. Ruckus, a company that offers free music and movie downloads to students at subscribing colleges, has introduced a tool that lets students create public playlists of the tunes they've downloaded using the service. At some universities, students have taken to browsing each other's iTunes folders as a social activity, and Ruckus hopes that it can get its subscribers to do the same. But unlike iTunes, Ruckus will let students download songs from their classmates' computers, instead of having to gather them from the service's centralized database. For more on the social side of file swapping, see [85]an article from The Chronicle by Scott Carlson. Students Still Swapping Software One in three college students considers illegal file sharing to be unequivocally wrong, according to [89]a new survey commissioned by the Business Software Alliance. For software manufacturers, that's hardly a heartening statistic, but it is an improvement over the 2003 survey, which found that only 23 percent of students felt that way. The new survey, conducted by the research firm Ipsos, paints a cloudy picture of the software industry's antipiracy efforts: More than 60 percent of students said they rarely or never paid for commercial software. And while 44 percent of students said their campuses had official policies on downloading (up from 28 percent in the 2003 survey), there was no consensus on whether campus antipiracy tactics were effective. Critics of the alliance say the study just shows that the software industry is fighting an unpopular battle.
Stephen Downes, of [90]OLDaily, has argued that the 2003 survey showed "massive support in the student population for file sharing and attitudes ranging from [91]indifference to support among the professors." The Perils of Podcasting Will Apple iTunes' [95]new podcasting venture run afoul of copyright law? It's conceivable, some experts say, depending on how the Supreme Court's Grokster decision is interpreted. The podcasting service, which made its debut on Tuesday, allows iTunes users to publish their own podcasts -- homemade radio programs that people can download automatically to their iPods or other portable MP3 players. Apple has said that it plans to monitor submitted podcasts for violations of copyright. But if some infringing material does sneak onto iTunes, the company could find itself in [96]uncharted territory, some analysts say. (Wired News)

References

46. http://www.timesdispatch.com/servlet/Satellite?pagename=RTD%2FMGArticle%2FRTD_BasicArticle&c=MGArticle&cid=1031783599826&path=!news&s=1045855934842
50. http://www.newsday.com/technology/business/wire/sns-ap-internet-intelligence,0,5425147.story?coll=sns-ap-technology-headlines
54. http://www.latimes.com/technology/la-fi-piracy1jul01,1,4223112.story?coll=la-headlines-technology
55. http://wiredcampus.chronicle.com/2005/03/guilty_pleas_fo.html
59. http://chronicle.com/prm/weekly/v51/i43/43a02702.htm
63. http://web.archive.org/web/20010710021553/http://bitconjurer.org/a_technological_activists_agenda.html
64. http://www.wired.com/news/digiwood/0,1412,68046,00.html?tw=wn_tophead_2
65. http://chronicle.com/daily/2005/06/2005062801t.htm
73. http://www.pcpro.co.uk/news/74631/p2p-company-to-offer-legal-file-sharing-of-sony-material.html
74. http://www.washingtonpost.com/wp-dyn/articles/A18568-2004Dec22.html
78. http://chronicle.com/prm/weekly/v51/i43/43a01101.htm
85. http://chronicle.com/prm/weekly/v50/i37/37a03201.htm
89. http://www.bsa.org/usa/press/newsreleases/Nationwide-Survey-June-2005.cfm
90. http://www.downes.ca/news/OLDaily.htm
91. http://www.downes.ca/cgi-bin/website/find.cgi?string=site~Business%20Software%20Alliance
95. http://wiredcampus.chronicle.com/2005/06/pod_people.html

E-mail me if you have problems getting the referenced articles. From checker at panix.com Wed Jul 6 21:19:41 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:19:41 -0400 (EDT) Subject: [Paleopsych] CHE: Buzzwords and Their Evolving Meanings Message-ID: Buzzwords and Their Evolving Meanings The Chronicle of Higher Education, 5.7.8 http://chronicle.com/weekly/v51/i44/44b00201.htm Fictions Peter Brooks, a professor of English and law at the University of Virginia and author of Realist Vision (Yale University Press, 2005): "Fictions" has to my mind become a crucial term for literary studies, perhaps for the humanities in general. By "fictions" I mean something other than the distinction you find at Barnes & Noble between fiction and nonfiction, though that's not irrelevant. Fictions as I want to use the term points us toward the realm of the self-consciously made-up (from fingere: to make, and to make-believe): works of the imagination that know they are that. I think I first saw the term used in this way in Frank Kermode's The Sense of an Ending (1967), one of those books that has only become more important over time. But perhaps the main source for the modern prominence of the word is Jorge Luis Borges's Ficciones (1944) -- that remarkable set of short meta-fictions, stories that comment on the very process of invention. And one could of course trace the word and concept much farther back, to Hans Vaihinger's philosophy of the "as-if," to Jeremy Bentham, to Jean-Jacques Rousseau, and so on. Kermode distinguishes between myth and fiction: Myth is a kind of degraded fiction to which individuals and cultures accord totalizing explanatory power, such as "the master race" or "the war on terror."
In this sense, fiction is not opposed to reality but the condition for distinguishing the real from the cultural overlays, the ideologies and false consciousness, that mask it. As Roland Barthes taught us, that which is presented as "nature" is often simply a cultural myth. So it is not surprising that another term struggling to re-emerge after decades of eclipse is "realism" -- set in opposition to distorting fantasies. *** Sexuality Dagmar Herzog, an associate professor of history at Michigan State University and author of Sex After Fascism: Memory and Morality in Twentieth-Century Germany (Princeton University Press, 2005): "Sexuality" emerged as a keyword among historians in the mid-1970s. For the first three decades of its existence, the history of sexuality tended to be about sexual mores and practices. Among other things we learned that in 19th-century Europe it had been considered normal for middle-class men to have sex with prostitutes both before and during marriage. Infanticide had been a common strategy of family planning in early modern Europe. And same-sex activities for both men and women were considered unremarkable in 18th- and 19th-century America. In sum, we learned that the most intimate realms of human behavior have changed dramatically over time. Increasingly historians of sexuality turned their attention to the 20th century. This was the century when sexuality became ever more central to an individual's identity. It was also the century when sex became a crucial marketing tool and a major engine of economic development. In addition, laws relating to sexuality became an ever greater focus of political and cultural conflict, from abortion to homosexual rights, from pornography to sex education. Until recently, in telling the history of sexuality in the 20th century, scholars often organized their accounts as stories of progress: Things were bad, then they were better. The early 21st century, however, has seen this mood of optimism come undone. 
Scholars of sexuality are now seeking to make sense of the bodily and emotional dissociations resulting from psychopharmaceuticals (from Prozac to Viagra) and cybersex. They are also struggling to understand the popular appeal of religious and sexual conservatism within both Christianity and Islam. In the process, the keyword "sexuality" is losing many of its post-1960s associations with emancipatory impulses.

***

Suburbia

Corey Dolgon, an associate professor and chairman of sociology at Worcester State College and author of The End of the Hamptons: Scenes From the Class Struggle in America's Paradise (New York University Press, 2005):

Historically, the suburbs originate in literature in the first industrial aristocracy's efforts to develop what Leo Marx called a "middle landscape, somewhere between the chaos, garbage, and immigrant-dense metropolis" and the "uncivilized, provincial, and poor countryside." But suburbanization quickly came to represent not the exurban enclaves of Long Island's Gold Coast or Philadelphia's Main Line, but the more middle-class, commuter subdivision of Levittown. This transformation chronicles what Robert Fishman called the "rise and fall" of bourgeois utopia.

Like cities, the suburbs have always inspired both utopian and dystopian images. Bucolic and independent, suburbs came to embody the American dream of homeownership, good schools, clean parks, and safe, finely manicured neighborhoods. By now, however, most scholars agree that neither depiction is accurate, as suburbs are commercially, demographically, and aesthetically diverse at the same time that they also suffer from social problems once associated only with the density and heterogeneity of urban centers.

More people who live in suburbs actually work in suburbs. More economic growth, housing development, and retail sales now take place outside the urban core. Increasingly, immigrants bypass cities for suburban jobs in landscaping, construction, retail, and other services.
Even terms like "urban sprawl" have been transformed into "suburban sprawl." Distant resort areas like Cape Cod have become year-round residences for middle- and upper-class urban refugees who today worry about chain stores, fast-food joints, and affordable tract housing mucking up their farm views.

In the minds of Americans, for sure, the suburbs remain a contested cultural construction, much like the middle class itself, where generations will continue to struggle over how one inscribes the physical and cultural landscape with a mass-produced and mass-consumed vision of the "good life."

***

Empire

Dane Kennedy, a professor of history and international affairs at George Washington University and author of The Highly Civilized Man: Richard Burton and the Victorian World (Harvard University Press, 2005):

"Empire," along with "imperialism," is one of those terms that historians can't seem to do without, but can't manage to agree upon. No sooner does one group figure they've got it contained within definitional boundaries than another group drives it in a new direction -- rather like empires themselves.

Two main issues are responsible for the term's instability. The first has to do with its morality. When history developed into a modern discipline, many of its classically educated practitioners equated empire with Rome, and Rome with civilization and progress. By this standard, empire was a good thing. Others, however, associated it with the tyranny of "Oriental despots" or the military aggression of Napoleon. For them, empire was a bad thing. This dispute about the moral merits of empire has persisted to the present day.

The issue of instrumentality also affects the varied uses of the term. In the estimation of some historians, empire is first and foremost a political phenomenon, involving the rule of one people over another. Others see it as essentially an instrument of economic forces, often unconstrained by any need for direct governance.
Still others believe that empire's greatest significance lies in its cultural power, its ability to get into people's heads. For the past decade or so, advocates of the cultural approach have had a particularly good run.

Doing history has always involved a dialogue between the past and the present, so it's no surprise that the U.S. invasions of Iraq and Afghanistan have intensified interest in empire among historians, who are contending anew about its instrumentality and its morality.

***

Taste

Denise Gigante, an assistant professor of English at Stanford University and author of Taste: A Literary History (Yale University Press, 2005):

There has been a shift recently in the connotation of the cultural keyword "taste." This term was a virtual obsession in 18th-century Europe as a synonym for discernment, or aesthetic connaissance. The connoisseur was an art appreciator and a person (usually a man) of letters. The forerunner of the 20th-century literary critic was the Man of Taste.

But the present shift in attention, through cultural studies, from the high arts to the low, from poetry to food and other everyday matters not associated with the patriarchal elite, has brought to light an important shift that took place at the turn of the 19th century in the discursive field of taste from the sublime to the stomach, as it were. Taste became embodied as a concept and associated more and more with the food and wine connoisseur, who showed individual distinction through fine dining. Eventually the display of savoir-faire among flavors came to assume an equal footing with -- if it did not assume cultural priority over -- what was once called a fine taste in the arts.

Today taste is confounded with physical pleasure(s) to the degree that we associate gastronomy -- or an aesthetic appreciation for food -- with our popular food culture, expressing its standards and principles through gourmet magazines and journalism (restaurant reviews, televised food shows, and so forth).
But a growing subfield within literary studies has come to understand the consumerist aspect of taste as nothing other than a cultural-material expansion of the 18th-century philosophical discourse of taste.

***

Rationality

Edward C. Rosenthal, an associate professor of management science and operations management at the Fox School of Business and Management at Temple University and author of The Era of Choice: The Ability to Choose and Its Transformation of Contemporary Life (MIT Press, 2005):

Not so long ago, we thought we knew what "rationality" was. But are we, in fact, rational beings? Try this: Would you prefer $100 right now or $110 a month from now? Would you prefer to pay a fine of $40 or else gamble on a coin flip in which you pay $100 on heads but pay nothing on tails? Many of us would select the $100 in the first scenario and gamble in the next one. Such "irrational" behavior defies conventional economic theory, and, as we are discovering, to get to the source of the problem, we need to get our heads examined -- literally.

By the early 1970s, economists and decision theorists had seemingly triumphed in their quest to work out mathematical models of optimal behavior when we exchange goods with others. And since notions like supply and demand and expected utility plausibly explained much of human behavior, the assumption that we operate as Homo economicus was not unreasonable.

But for 25 years now evidence has been piling up that our behavior does not always fit the models. This is not to say that we are merely rationally challenged beings. Rather, there might be a method to our madness. Foraging theory, for example, has shown that even animal behavior fits rigorous economic models. For us, perhaps risk averseness is best in certain circumstances. Perhaps emotion, not intellect, is at times the superior guide. Perhaps hot impulsiveness can be more adaptive than cool patience.
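The two choices above rest on simple expected-value arithmetic, which is why conventional theory brands the common answers "irrational." A minimal sketch of that arithmetic (the dollar figures come from the passage; the code itself is only illustrative):

```python
# Expected-value arithmetic behind the two choices in the passage.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Choice 1: $100 now vs. $110 a month from now.
# Under standard discounting, taking the $100 is consistent only with a
# monthly discount rate above 10%: 110 / (1 + r) < 100 implies r > 0.10.
implied_monthly_rate = 110 / 100 - 1  # 0.10, far above typical market rates

# Choice 2: a sure fine of $40 vs. a coin flip losing $100 or nothing.
sure_loss = -40
gamble = expected_value([(0.5, -100), (0.5, 0)])  # -50

# The gamble's expected value (-$50) is worse than the sure fine (-$40),
# yet many people prefer the gamble -- risk-seeking over losses.
print(implied_monthly_rate, sure_loss, gamble)
```

The point is not that people fail the arithmetic, but that the arithmetic may be the wrong yardstick, which is the essay's suggestion that there is "a method to our madness."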
As we begin to unravel the complexities of rationality, it is very exciting to track the progress being made in behavioral-decision theory, intertemporal choice, neuroeconomics, and other fields in which the goal is to redefine, rather than dismantle, the notion of humans as rational actors.

From checker at panix.com Wed Jul 6 21:19:48 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 6 Jul 2005 17:19:48 -0400 (EDT)
Subject: [Paleopsych] CHE: Novel Perspectives on Bioethics (letters appended)
Message-ID:

CHE: Novel Perspectives on Bioethics (letters appended)
The Chronicle of Higher Education, 5.5.13
http://chronicle.com/weekly/v51/i36/36b00601.htm

By MARTHA MONTELLO

On March 16, the Kansas Legislature heatedly debated a bill that would criminalize all stem-cell research in the state. Evangelical-Christian politicians and conservative lawmakers argued with molecular biologists and physicians from the University of Kansas' medical school about the morality of therapeutic cloning. Up against a substantial audience of vocal religious conservatives, William B. Neaves, CEO and president of the Stowers Institute for Medical Research, a large, privately financed biomedical-research facility in Kansas City, began his impassioned defense of the new research by giving his credentials as "a born-again Christian for 30 years." Barbara Atkinson, executive vice chancellor of the University of Kansas Medical Center, tried to articulate the difference between "a clump of cells in a petri dish" and what several hostile representatives repeatedly interrupted to insist is "early human life." Clearly, in this forum, language mattered. Each word carried wagonloads of moral resonance.

I am a literature professor. I was at the hearing because I am also chairwoman of the pediatric-ethics committee at the University of Kansas Medical Center.
I listened to the debates get more and more heated as the positions got thinner and more polarized, and I kept thinking that these scientists and lawmakers needed to read more fiction and poetry. Leon R. Kass, chairman of the President's Council on Bioethics, apparently feels the same way. He opened the council's first session by asking members to read Hawthorne's story "The Birthmark," and he has since published an anthology of literature and poetry about bioethics issues.

The fight in Kansas (the bill was not put to a vote) is in some ways a microcosm of what has been happening around the country. From Kevorkian to Schiavo, cloning to antidepressants, issues of bioethics increasingly underlie controversies that dominate public and political discussion. Decisions about stem-cell research, end-of-life choices, organ transplantation, and mind- and body-enhancing drugs, among others, have become flash points for front-page news day after day.

At the same time, some good literary narratives have emerged over the past few years that reveal our common yet deeply individual struggles to find an ethics commensurate with rapid advances in the new science and technologies. Kazuo Ishiguro's elegiac, disturbing new novel, Never Let Me Go, re-imagines our world in a strange, haunting tale of mystery, horror, love, and loss. Set in "England, 1990s," the story is pseudohistorical fiction with a hazy aura of scientific experimentation. A typical Ishiguro narrator, Kathy H. looks back on her first three decades, trying to puzzle out their meaning and discern the vague menace of what lies ahead. In intricate detail she sifts through her years at Hailsham, an apparently idyllic, if isolated, British boarding school, "in a smooth hollow with fields rising on all sides."
Kathy and the other students were nurtured by watchful teachers and "guardians," who gave them weekly medical checks, warned them about the dangers of smoking, and monitored their athletics triumphs and adolescent struggles. Sheltered and protected, she and her friends Ruth and Tommy always knew that they were somehow special, that their well-being was important to the society somewhere outside, although they understood that they would never belong there. From the opening pages, a disturbing abnormality permeates their enclosed world. While the events at Hailsham are almost absurdly trivial -- Tommy is taunted on the soccer field, Laura gets caught running through the rhubarb garden, Kathy loses a favorite music tape -- whispered secrets pass among guardians and teachers, and the atmosphere is ominous -- as Kathy puts it, "troubling and strange." The children have no families, no surnames, no possessions but castoffs -- other people's junk. Told with a cool dispassion through a mist of hints, intuitions, and guesses, Kathy's memories gradually lift the veil on a horrifying reality: These children were cloned, created solely to become organ donors. Once they leave Hailsham (with its Dickensian reverberations of Havisham, that ghostly abuser of children) they will become "caregivers," then "donors," and if they live to make their "fourth donation," will "complete." The coded language that Kathy has learned to describe her fate flattens the unthinkable and renders it almost ordinary, simply what is, so bloodlessly that it heightens our sense of astonishment. What makes these doomed clones so odd is that they never try to escape their fate. Almost passive, they move in a fog of self-reinforced ignorance, resigned to the deadly destiny for which they have been created. However, in a dramatic scene near the end of the novel, Kathy and Tommy do try to discover, from one of the high-minded ladies who designed Hailsham, if a temporary "deferral" is possible. 
It is too late for any of them now, the woman finally divulges. Once the clones were created, years ago during a time of rapid scientific breakthroughs, their donations became the necessary means of curing previously incurable conditions. Society has become dependent on them. Now there is no turning back. The only way people can accept the program is to believe that these children are not fully human. Although "there were arguments" when the program began, she tells them, people's primary concern now is that their own family members not die from cancer, heart disease, diabetes, or motor-neuron diseases. People outside prefer to believe that the transplanted organs come from nowhere, or at least from beings less than human.

Readers of Ishiguro's fiction will recognize his mastery in creating characters psychologically maimed by an eerie atrocity. From his debut novel, A Pale View of Hills (Putnam, 1982), Ishiguro's approach to horror has been oblique, restrained, and enigmatic. The war-ravaged widow from Nagasaki in that work presages the repressed English butler of The Remains of the Day (Random House, 1990) and Kathy herself, all long-suffering victims with wasted lives whose sense of obligation robs them of happiness. Their emotions reined in, their sight obscured, they are subject to wistful landscapes, long journeys, and a feeling of being far from the possibility of home and belonging.

Never Let Me Go, however, ventures onto new terrain for Ishiguro by situating itself within current controversies about scientific research. Taking on some of the moral arguments about genetic engineering, the novel inevitably calls into question whether such fiction adds to the debates or clouds them -- and whether serious fiction about bioethics is enriched by the currency of its topic or hampered by it. Here Ishiguro's novel joins company with others that are centered in contemporary bioethics issues and might be considered a genre of their own.
A decade ago, Doris Betts penetrated the intricate emotions around living donors' organ transplantation in her exquisitely rendered Souls Raised From the Dead. The novel offered a human dimension and nuanced depth to this area of medical-ethics deliberations, which were making headline news. In Betts's story, a dying young daughter needs as close a match as possible for a new kidney. Her parents face complexities and contradictions behind informed consent and true autonomy that are far more subtle, wrenching, and real than any medical document or philosophy-journal article can render. Betts does justice to the medical and moral questions surrounding decisions that physicians, patients, and families must make regarding potential organ donations. What makes the book so compelling, though, is its focus on the various and often divergent emotional strategies that parents and children use to cope with fear, sacrifice, and impending loss. The 13-year-old Mary Grace, her parents, and grandparents reveal themselves as fully rounded, noninterchangeable human beings who come to their decisions and moral understandings over time, within their own unique personal histories and relationships with each other. As the therapeutic possibilities of transplant surgery were breaking new ground in hospitals across the country, surgeons, families, and hospital ethics committees grappled with dilemmas about how to make good choices between the medical dictum to "do no harm" and the ethical responsibility to honor patients' sovereignty over their own bodies. Betts's novel captured the difficulty of doing the right thing for families enduring often inexpressible suffering: How much sacrifice can we expect of one family member to save another? The ethical complexities regarding organ donations, and particularly the dilemmas associated with decisions to conceive children as donors, are escalating. 
Four years ago The New York Times reported on two families who each conceived a child to save the life of another one. Fanconi anemia causes bone-marrow failure and eventually leukemia and other kinds of cancer. Children born with the disease rarely live past early childhood. Their best chance of survival comes from a bone-marrow transplant from a perfectly matched sibling. Many Fanconi parents have conceived another child in the hope that luck would give them an ideal genetic match. These two couples, however, became the first to use new reproductive technologies to select from embryos resulting from in vitro fertilization, so they could be certain that this second baby would be a perfect match. When the article appeared in the Times, many people wondered if it is wrong to create a child for "spare parts." News reports conjured up fears of "Frankenstein medicine." State and federal legislatures threatened laws to ban research using embryos. A fictional version of this dilemma appears in Jodi Picoult's novel My Sister's Keeper. Picoult, a novelist drawn to such charged topics as teen suicide and statutory rape, takes up this bioethics narrative of parents desperate to save a sick child through the promise of genetic engineering. Conceived in that way, Anna Fitzgerald has served since her birth as the perfectly matched donor for her sister, Kate, who has leukemia, supplying stem cells, bone marrow, and blood whenever needed. Now, though, as her sister's organs begin to fail, the feisty Anna balks when she is expected to donate a kidney. Through alternating points of view, Picoult exposes the family's moral, emotional, and legal dilemmas, asking if it can be right to use -- and perhaps sacrifice -- one child to save the life of another. The story draws the reader in with its interesting premise -- one sister's vital needs pitted against the other's -- but ultimately disintegrates within a melodramatic plot that strands its underdeveloped characters. 
Why is the girls' mother so blind and deaf to Anna's misgivings about her role as donor? How can we possibly believe the contrived ending, which circumvents the parents' need to make a difficult moral choice? Ultimately the novel trivializes what deserves to be portrayed as a profoundly painful Sophie's choice, using the contentious bioethics issue as grist for a kind of formulaic writing. While authors like Betts and Picoult have examined ethical dilemmas of the new science in a style that might be called realistic family drama, others lean toward science fiction, imagining dystopian futures that are chillingly based on the present. Often prescient, they reflect our unarticulated fears, mirroring our rising anxiety about where we are going and who we are becoming. In addressing concerns about cloning, artificial reproduction, and organ donation, these novels join an even broader, older genre, the dystopian novels of the biological revolution. In 1987 Walker Percy published The Thanatos Syndrome, a scathing fictional exploration of what the then-new psychotropic drugs might mean to our understanding of being human. In this last and darkest novel by the physician-writer, the psychiatrist Tom More stumbles on a scheme to improve human behavior by adding heavy sodium to the water supply. After all, the schemers argue, what fluoride has done for oral hygiene, we might do for crime, disease, depression, and poor memory! More is intrigued but ultimately aghast at the consequences: humans reduced to lusty apes with no discernible soul or even self-consciousness. Percy cleverly captures many of our qualms about such enhancement therapies in a fast-paced plot that reads like a thriller. 
Many readers, however, feel that this sixth and final novel is the least compelling of Percy's oeuvre, emphasizing his moral outrage over the excesses of science at the expense of a protagonist's spiritual and emotional journey that had previously been the hallmark of his highly acclaimed fiction.

With less dark humor but equal verve, Margaret Atwood's Oryx and Crake chronicles the creation of a would-be paradise shaped and then obliterated by genetic manipulation. Echoes of her earlier best seller The Handmaid's Tale (Houghton Mifflin, 1986) reverberate through this postapocalyptic world set in an indeterminate future, where Snowman, the proverbial last man alive, describes how the primal landscape came to be after the evisceration of bioengineering gone awry. A modern-day Robinson Crusoe, Snowman is marooned on a parched beach, stranded between the polluted water and a chemical wasteland that has been stripped of humankind by a virulent plague. Once he melts away, even the vague memories of what was will have disappeared.

As in other works of science fiction, while its plot complications drive the narrative, its powerful conceptual framework dominates the stage. For all it lacks in character complexity and realistic psychological motivations, this 17th book of Atwood's fiction has a captivating Swiftian moral energy, announced in the opening quotation, from Gulliver's Travels: "My principal design was to inform you, and not to amuse you." Readers, however, might wish that Atwood had made a stronger effort to amuse us. Her ability to sustain our interest is challenged by the story's unremitting bleakness and the lack of real moral depth to its few characters.

Even with its weaknesses, Atwood's is a powerful cautionary tale, similar in some ways to Caryl Churchill's inventive play A Number (2002).
That drama is constructed of a series of dialogues in which a son confronts his father (named Salter) with the news that he is one of "a number" of clones (all named Bernard). Years ago, grieving the death of his wife, Salter was left to raise a difficult son, lost to him in some deep way, whom he finally put into "care." Sometime later, wanting a replacement for the lost son, he had the boy cloned. Without his knowledge, 19 others were created, too. Now Salter hears not only the emotional pain and anger of his original troubled son, but also the harrowing psychological struggles of several of the cloned Bernards. Salter responds with a mix of anguish and resignation as he faces the consequences of decisions he once made without much thought. This strange play winds through an ethical maze as each of the characters desperately tries to come to some livable terms with what genetic engineering has wrought. The drama is inventive in both its staccato elliptical dialogues and its sheer number of existential and ethical ideas. In the end, though, the characters never emerge as human, never engage us sufficiently to make us care about their ordeals with selfhood and love. When Salter says to one of the cloned Bernards, "What they've done they've damaged your uniqueness, weakened your identity," it is difficult to believe that they were ever capable of possessing either. Although Churchill's nightmare may seem especially odd, her tale of violence, deception, and loss resonates with those of Betts, Ishiguro, and Picoult. What if you might lose your child? If the means were available, would you take any chance, do anything, to save her? Or, if lost to you, to bring him back? All of these stories have in common their underlying questions about where bioengineering is leading us, what kinds of choices it asks us to make, and where the true costs and benefits lie. 
What makes the stories different from other forms of ethical inquiry is their narrative form, their way of knowing as literature. John Gardner reminds us that novels are a form of moral laboratory. In the pages of well-written fiction, we explore the way a unique human being in a certain set of circumstances makes moral decisions and lives out their consequences. Some of the novels being written now offer valuable cautionary tales about what is at stake in our current forays into new science and technology, asking us, as Ishiguro does in Never Let Me Go, What is immutable? What endures? What is essential about being human? Where does the essential core of identity lie? Does it derive from nature or nurture, from our environment or genetics?

But the best go further. As Ishiguro's does, they take the bioethics issue as a fundamental moral challenge. Instead of using an aspect of bioethics as an engine to drive the plot, some authors succeed in using it as a prism that shines new light onto timeless questions about what it means to be fully human. At its heart, Ishiguro's tale has very little to do with the specific current controversies over cloning or genetic engineering or organ transplantation, any more than The Remains of the Day has to do with butlering or A Pale View of Hills has to do with surviving the atomic bomb.

By the end of the novel, we discover that Never Let Me Go is, if cautionary, also subtler and more subversive than we suspected. Tommy and Ruth are already gone, and Kathy herself is ready to begin the "donations" that will lead to her own "completion." During one of her long road trips, she stops the car for "the only indulgent thing" she's ever done in a life defined by duty and "what we're supposed to be doing." Looking out over an empty plowed field, just this once she allows herself to feel an inkling of what she's lost and all she will never have.
At this moment, we realize ourselves in Kathy, and we see her foreshortened and stunted life as not so very different from our own. The biological revolution's greatest surprise of all may be that its dilemmas are not really new. Instead, it may simply deepen the ones we've always faced about how to find meaning in our own lives and the lives of others.

Martha Montello is an associate professor in the department of history and philosophy of medicine and director of the Writing Resource Center in the School of Medicine at the University of Kansas. She also lectures on literature and ethics at the Harvard-MIT Division of Health Sciences & Technology, and co-edited Stories Matter: The Role of Narrative in Medical Ethics (Routledge, 2002).

WORKS DISCUSSED IN THIS ESSAY

My Sister's Keeper, by Jodi Picoult (Atria, 2004)
Never Let Me Go, by Kazuo Ishiguro (Knopf, 2005)
A Number, by Caryl Churchill (a 2002 play published by Theatre Communications Group in 2003)
Oryx and Crake, by Margaret Atwood (Nan A. Talese, 2003)
Souls Raised From the Dead, by Doris Betts (Knopf, 1994)
The Thanatos Syndrome, by Walker Percy (Farrar, Straus and Giroux, 1987)

-------------------

Does It Make Sense to Use Fiction as a Guide to Bioethics?
The Chronicle of Higher Education, 5.7.8
http://chronicle.com/weekly/v51/i44/44b01301.htm

To the Editor:

In "Novel Perspectives on Bioethics" (The Review, May 13), Martha Montello discusses the treatment of bioethical issues in literary works that "lean toward science fiction" without discussing any examples from science fiction itself, which offers interesting, incisive considerations of bioethical concerns in abundance. Examples include Nancy Kress's 1993 Beggars in Spain, about genetic engineering, and Nancy Farmer's 2002 young-adult novel about cloning, The House of the Scorpion.
These books and many other examples of science fiction -- contrary to Montello's implication that science fiction is a lesser literature dominated by plot and concept -- shine, like the works she discusses, "new light onto timeless questions about what it means to be fully human" through three-dimensional characters and well-crafted prose. As I argue in my forthcoming book, Understanding Contemporary American Science Fiction: The Age of Maturity, 1970-2000, the traditional disdain for science fiction among many literary scholars should not blind readers to its virtues -- including what it has to contribute to discussions about how science affects our lives.

Darren Harris-Fain
Associate Professor of English
Shawnee State University
Portsmouth, Ohio

***

To the Editor:

I read with some measure of consternation Martha Montello's essay, in which she advises those Christian fundamentalists and their opponents who have been debating bioethical issues in the Kansas Legislature to read more fiction and poetry in order to refine their ethical barometers. Although the proposal is a sound one, the reasons Professor Montello adduces -- viz., that many works of fiction, a few of which she discusses in some detail, address bioethical issues in a thoughtful and sensitive fashion -- are not at all compelling. In fact, resorting to works of art to gain knowledge about how bioethical controversies should be resolved makes Plato's decision to exile the poets seem appealing. Great works of art are vivid and moving, but they necessarily lack the systematic approach and appropriately cold-blooded rationality that other disciplines, such as ethical philosophy, possess in their best moments. Kazuo Ishiguro's portrait of Kathy H.
in Never Let Me Go may be compelling as an aesthetic matter, but anyone who concludes that cloning is a horrific enterprise on the strength of that portrait is allowing an emotional reaction to a piece of science fiction to interfere with the logical scrutiny of reality. Great literature may be instructive, but not like this. One should not turn to Ishiguro to discern bioethical truths any more than one would turn to Shakespeare's history plays to learn British history. Rather, such literature teaches us how to engage our world in a subtler, more nuanced fashion, to make fine distinctions in lieu of crude, categorical ones. And it cultivates in us the skill of textual interpretation, a significant ability in a forum such as the Kansas Legislature, where, as Montello observes, "language mattered." It teaches those who would read biblical texts in a fundamentalist fashion about the use of tropes and other literary devices that permit nonliteral readings of texts and that allow the reader to get beyond the dangerous, antipodean view that such works are either factually true or else false. After all, as Nietzsche explained in responding to Plato, art is the one thing that is true because it treats appearance as appearance; its aim is precisely not to deceive. To use art as a source of philosophical or factual truths, as Professor Montello proposes, therefore, is no less an abuse than the manner in which religious conservatives seek to employ the Bible to shape their beliefs about a world far too nuanced to be contained by its well-worn pages.

Alex Zubatov
New York

***

To the Editor:

I wish I could be more sanguine that the study of literature leads to a better bioethics. Not that I'm a disbeliever in literature. I've written two novels (that never found their way into print but that were popular with my friends, or so they diplomatically said). ...
But contrary to her purpose, Martha Montello's tour of contemporary fiction shows that literature takes bioethics only so far. ... There are plenty of literary potboilers that don't help bioethics at all, and it's an open question whether fiction overall enlightens more than it stultifies. Montello says that fiction reveals "our common yet deeply individual struggles to find an ethics commensurate with rapid advances in the new science and technologies." Maybe, but there's still a step missing: How do works of imagination improve either the framing or the resolution of legal and policy decisions? ... I would be better persuaded of the value of fiction and poetry to bioethics if Montello showed how this literature actually helped answer important questions of the day, in ways that were superior to the answers arrived at without reference to someone's idea of the literary canon. Timothy F. Murphy Professor of Philosophy in the Biomedical Sciences College of Medicine University of Illinois Chicago From checker at panix.com Wed Jul 6 21:20:58 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:20:58 -0400 (EDT) Subject: [Paleopsych] Telegraph: Susan Blackmore: I take illegal drugs for inspiration Message-ID: http://www.susanblackmore.co.uk/journalism/telegraphdrugs.htm Daily Telegraph, Saturday May 21st 2005, pp 17-18 (Note: This version is very slightly different from the published, edited, version) Every year, like a social drinker who wants to prove to herself that she's not an alcoholic, I give up cannabis for a month. It can be a tough and dreary time - and much as I enjoy a glass of wine with dinner, alcohol cannot take its place. Some people may smoke dope just to relax or have fun, but for me the reason goes deeper. In fact, I can honestly say that without cannabis, most of my scientific research would never have been done and most of my books on psychology and evolution would not have been written. 
Some evenings, after a long day at my desk, I'll slip into the bath, light a candle and a spliff, and let the ideas flow - that lecture I have to give to 500 people next week, that article I'm writing for New Scientist, those tricky last words of a book I've been working on for months. This is the time when the sentences seem to write themselves. Or I might sit out in my greenhouse on a summer evening among my tomatoes and peach trees, struggling with questions about free will or the nature of the universe, and find that a smoke gives me new ways of thinking about them. Yes, I know there are serious risks to my health, and I know I might be caught and fined or put in prison. But I weigh all this up, and go on smoking grass. For both individuals and society, all drugs present a dilemma: are they worth the risks to health, wealth and sanity? For me, the pay-off is the scientific inspiration, the wealth of new ideas and the spur to inner exploration. But if I end up a mental and physical wreck, I hereby give you my permission to gloat and say: "I told you so". My first encounter with drugs was a joint shared with a college friend in my first term at Oxford. This was at the tail end of the days of psychedelia and flower power - and cannabis was easy to obtain. After long days of lectures and writing essays, we enjoyed the laughter and giggling, the heightened sensations and crazy ideas that the drug seemed to let loose. Then, one night, something out of the ordinary happened - though whether it was caused by the drug, lack of sleep or something else altogether, I don't know. I was listening to a record with two friends, sitting cross-legged on the floor, and I had smoked just enough to induce a mild synaesthesia. The sound of the music had somehow induced the sensation of rushing through a long, dark tunnel of rustling leaves towards a bright light. I love tunnels. 
They come on the verges of sleep and death and are well known in all the cultures that use drugs for ritual, magic or healing. The reason for them lies in the visual cortex at the back of the brain, where certain drugs interfere with the inhibitory systems, releasing patterns of circles and spirals that form into tunnels and lights. I didn't know about the science then. I was just enjoying the ride, when one of my friends asked a peculiar question: "Where are you, Sue?" Where was I? I was in the tunnel. No, I was in my friend's room. I struggled to answer; then the confusion cleared and I was looking down on the familiar scene from above. "I'm on the ceiling," I said, as I watched the mouth down below open and close and say the words in unison. It was a most peculiar sensation. My friend persisted. Can you move? Yes. Can you go through the walls? Yes. And I was off exploring what I thought, at the time, was the real world. It was a wonderful feeling - like a flying dream, only more realistic and intense. The experience lasted more than two hours, and I remember it clearly even now. Eventually, it came to seem more like a mystical experience in which time and space had lost their meaning and I appeared to merge with the universe. Years later, when I began research on out-of-body and near-death experiences, I realised that I'd had all those now-familiar sensations that people report after close brushes with death. And I wanted to find out more. However, nothing in the physiology and psychology that I was studying could remotely begin to cope with something like this. We were learning about rats' brains, and memory mechanisms, not mind and consciousness - let alone a mind that could apparently leave its body and travel around without it. Then and there, I decided to become a parapsychologist and devote my life to proving all those closed-minded scientists wrong. But I was the one who was wrong.
I did become a parapsychologist, but decades of difficult research taught me that ESP almost certainly doesn't exist and that nothing leaves the body during an out-of-body experience - however realistic it may feel. Although parapsychology gave me no answers, I was still obsessed with a scientific mystery: how can we explain the mind and consciousness from what we know about the brain? Like any conventional scientist, I carried out experiments and surveys and studied the latest developments in psychology and neuroscience. But since the object of my inquiry was consciousness itself, this wasn't enough. I wanted to investigate my own consciousness as well. So I tried everything from weird machines and gadgets to long-term training in meditation - but I have to admit that drugs have played a major role. Back in those student days, it was the hallucinogens, or "mind-revealing" psychedelics, that excited us - and the ultimate hallucinogen must be LSD. Effective in minuscule doses, and not physically addictive, LSD takes you on a "trip" that lasts about eight to 10 hours but can seem like forever. Every sense is enhanced or distorted, objects change shape and form, terrors flood up from your own mind, and you can find joy in the simplest thing. Once the trip has begun, there is no escape - no antidote, no way to stop the journey into the depths of your own mind. In my twenties, I used to take acid two or three times a year - and this was quite enough, for an acid trip is not an adventure to be undertaken lightly. I've met the horrors with several hallucinogens, including magic mushrooms that I grew myself. I remember once gazing at a cheerfully coloured cushion, only to see each streak of colour turn into a scene of rape, mutilation or torture, the victims writhing and screaming - and when I shut my eyes, it didn't go away. It is easy to understand how such visions can turn into a classic "bad trip", though that has never happened to me.
Instead, the onslaught of images eventually taught me to see and accept the frightening depths of my own mind - to face up to the fact that, under other circumstances, I might be either torturer or tortured. In a curious way, this makes it easier to cope with the guilt, fear or anxiety of ordinary life. Certainly, acceptance is a skill worth having - though I guess there are easier ways of acquiring it. Then there's the fun and just the plain strangeness of LSD. On one sunny trip in Oxford, my friend and I stopped under a vast oak tree where the path had been trampled into deep furrows by cattle and then dried solid by the hot weather. We must have spent an hour there, gazing in wonder at the texture of this dried mud; at the hills and valleys in miniature; at the hoof-shaped pits and sharp cliffs; at the shifting patterns in the dappled shade. I felt that I knew every inch of this special place; that I had an intimate connection with the mud. Suddenly, I noticed a very old man with a stick, walking slowly towards us on the path. Keep calm, I told myself. Act normal. He'll just say hello, walk by, and be gone. "Excuse me, young lady," he said in a cracked voice. "My eyes are weak and, in this light, I can't see my way. Would you help me across?" And so it was that I found myself, dream-like, guiding the old man slowly across my special place - a patch of mud that I knew as well as my own features. Two days later, my friend came back from lectures, very excited. "I've seen him. The man with the stick. He's real!" We both feared that we'd hallucinated him. Aldous Huxley once said that mescaline opened "the doors of perception"; it certainly did that for me. I took it one day with friends in the country, where we walked in spring meadows, identified wild flowers, marvelled over sparkling spider's webs and gasped at the colours in the sky that rippled overhead. Back at the farmhouse, I sat playing with a kitten until kitten and flowers seemed inextricable. 
I took a pen and began to draw. I still have that little flower-kitten drawing on my study wall today. On another wall is a field of daffodils in oils. One day, many years later, I went to my regular art class the day after an LSD trip. The teacher had brought in a bunch of daffodils and given us one each, in a milk bottle. Mine was beautiful; but I couldn't draw just one. My vision was filled with daffodils, and I began to paint, in bold colours, huge blooms to fill the entire canvas. I will never be a great painter but, like many artists through the ages, I had found new ways of seeing that were induced by a chemical in the brain. So can drugs be creative? I would say so, although the dangers are great - not just the dangers inherent in any drug use, but the danger of coming to rely on them too much and of neglecting the hard work that both art and science demand. There are plenty of good reasons to shun drug-induced creativity. Yet, in my own case, drugs have an interesting role: in trying to understand consciousness, I am taking substances that affect the brain that I'm trying to understand. In other words, they alter the mind that is both the investigator and the investigated. Interestingly, hallucinogens such as LSD and psilocybin are the least popular of today's street drugs - perhaps because they demand so much of the person who takes them and promise neither pleasure nor cheap happiness. Instead, the money is all in heroin, cocaine and other drugs of addiction. I have not enjoyed my few experiences with cocaine. I don't like the rush of false confidence and energy it provides - partly because that's not what I'm looking for and partly because I've seen cocaine take people over and ruin their lives. But many people love it - and the dealers get rich on getting people hooked. This is tragic.
In just about every human society there has ever been, people have used dangerous drugs - but most have developed rituals that bring an element of control or safety to the experience. In more primitive societies, it is shamans and healers who control the use of dangerous drugs, choose appropriate settings in which to take them and teach people how to appreciate the visions and insights that they can bring. In our own society, criminals control all drug sales. This means that users have no way of knowing exactly what they are buying and no-one to teach them how to use these dangerous tools. I have been lucky with my own teachers. The first time I took ecstasy, for example, I was with three people I had met at a Norwegian conference on death and dying. It was mid-summer, and they had invited me to join them on a trip around the fjords. One afternoon, we sat together and took pure crystals of MDMA - nothing like the frightening mixtures for sale on the streets today. MDMA has the curious effect of making you feel warm and loving towards everyone and everything around you: within a few short hours, we were all convinced that we knew each other in a deep and intimate way. Then we deliberately each set off alone to walk in the mountains, where the same feeling of love now seemed to encompass the entire landscape. I was told then that I should make the most of my first few experiences with MDMA because, after five or six doses, I would never get the same effects again. In my experience, this has been true, although prohibition makes it all but impossible to find such things out. In fact, we know horrifyingly little about the psychological effects of drugs that people take every day in Britain because scientists are not allowed to carry out the necessary research. That is why I've had to do my own. I once had an expert friend inject me with a high dose of ketamine because I had heard it could induce out-of-body experiences. 
Known as K, or Special K, on the street, this is an anaesthetic used more often by vets than anaesthetists because of its unpleasant tendency to produce nightmares. Get the dose right, as I did, and you are completely paralysed apart from the ability to move your eyes. This is not very pleasant. However, by imagining I was lifting out of my body, I felt I could fly, and I set off home to see what my children were up to. I was sure that I saw them playing in the kitchen; but when I checked the next day, I was told they had been asleep. Back in the room, my guide began holding up his fingers out of my line of vision and, as soon as my mouth started working again, made me guess how many. I seemed to see the fingers all right, but my guesses were totally wrong. I didn't repeat the experiment. It was not nearly as interesting as those drugs, such as LSD, psilocybin, DMT or mescaline, that undermine everything you take for granted. These are psychedelics that threaten our ordinary sense of self, and that is where they touch most deeply on my scientific interests. What is a self? How does the brain create this sense of being "me", inside this head, looking out at the world, when I know that behind my eyes there are only millions of brain cells - and nowhere for an inner self to hide? How can those millions of brain cells give rise to free will when they are merely physical and chemical machines? In threatening our sense of self, could it be that these drugs reveal the scary truth that there is no such thing? Mystics would say so. And, here, we hit an old and familiar question: do drugs and mystical experiences lead to the same "insights"? And are those insights true? Since those first trips, I have taken many other drugs - such as nitrous oxide, or laughing gas. For just a few moments, I have understood everything - "Yes, yes, this is so right, this is how it has to be" - and then the certainty vanishes and you cannot say what you understood. 
When Humphry Davy, the chemist who pioneered its study, took nitrous oxide himself in 1799, he exclaimed: "Nothing exists but thoughts". Others, too, have found their views profoundly shifted. It seems quite extraordinary to me that so simple a molecule can change one's philosophy, even for a few moments, yet it seems it can. Why does the gas make you laugh? Perhaps it is a reaction to a brief appreciation of that terrifying cosmic joke - that we are just shifting patterns in a meaningless universe. Are drugs the quick and dirty route to insight? I wanted to try the slow route, too. So I have spent more than 20 years training in meditation - not joining any cult or religion but learning the discipline of steadily looking into my own mind. Gradually, the mind calms, space opens up, self and other become indistinguishable, and desires drop away. It's an old metaphor, but people often liken the task to climbing a mountain. The drugs can take you up in a helicopter to see what's there, but you can't stay. In the end, you have to climb the mountain yourself - the hard way. Even so, by giving you that first glimpse, the drugs may provide the inspiration to keep climbing. Psychologist Susan Blackmore, neuroscientist Colin Blakemore and author Mike Jay will be appearing at the Cheltenham Science Festival (June 8-12) to discuss whether drugs can teach us anything about ourselves. For tickets to the Altered States session at the town hall (£6, 4pm on Saturday, June 11) or for any other festival event, please call 01242 227 979 (information: www.cheltenhamfestivals.org.uk) From waluk at earthlink.net Wed Jul 6 23:04:03 2005 From: waluk at earthlink.net (G.
Reinhart-Waller) Date: Wed, 06 Jul 2005 16:04:03 -0700 Subject: [Paleopsych] stereotypes In-Reply-To: <20050706195240.68173.qmail@web30808.mail.mud.yahoo.com> References: <20050706195240.68173.qmail@web30808.mail.mud.yahoo.com> Message-ID: <42CC6363.3060508@earthlink.net> Michael Christopher wrote: >>Germans without antisemitism in their families would still have known to avoid shopping at Jewish stores or being seen associating with Jews, picking up on their "marked" status and fearing the consequences of associating with a scapegoated class. >> Gerry replies: Since my husband is from a tightly knit German family (both parents were of German heritage) I've known lots of ethnic Germans and never have I ever seen or heard any of them refuse to shop at "Jewish stores" or hang out with Jews. One possible explanation could be that very few shops in California are designated as "Jewish" the way they are in the greater Boston area. I fear that what you are conveying could be another "urban legend". Gerry From checker at panix.com Thu Jul 7 14:47:47 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:47:47 -0400 (EDT) Subject: [Paleopsych] NYT: How Much Is Nature Worth? For You, $33 Trillion Message-ID: How Much Is Nature Worth? For You, $33 Trillion The New York Times, 97.5.20 [Note the date. I found this when doing some Spring cleaning. No, I won't say which spring!] By WILLIAM K. STEVENS HOW much is nature worth? Some say the question is unanswerable, that it is impossible to calculate a dollar value for the natural world. Others say the question should not even be asked; that nature, like human life, is priceless and should not be devalued as if it were a mere commodity. But economists and ecologists are searching for the answer anyway. Nature performs valuable, practical, measurable functions, they say; without them the human economy could not exist, and in many cases people could not duplicate them as cheaply -- or at all.
And they say it is time that the value of these functions is considered when economic decisions are made. One notable example of nature's economic value that they cite is the purification of New York City's water supply by microorganisms as the water percolates through the soil of the Catskills. The city plans to spend $660 million to preserve that watershed in good health; the alternative, a water treatment plant, would have cost $4 billion to build. Nature performs a long list of other economic services as well. Flood control, soil formation, pollination, food and timber production, provision of the raw material for new medicines, recreational opportunities and the maintenance of a favorable climate are among them. But like a well that is taken for granted until it runs dry, these ecosystem services, as ecologists call them, have long been overlooked until they either no longer work or are gone -- as, for instance, when the widespread destruction of Midwestern wetlands meant they could no longer perform their natural function of sponging up water from disastrous floods like those of recent years. And to the extent ecosystem services are noticed at all, people have tended to regard them as free. Now, as human activity gradually uses up or destroys this natural capital and eats away at the natural systems that provide many of the services, many experts are insisting that the worth of ecosystem services must be calculated and heeded. The results of the latest and in some ways the most ambitious effort to place a dollar value on natural capital and services were announced last week. Thirteen ecologists, economists and geographers, in a report in the journal Nature, estimated the present global value of 17 ecosystem services at $16 trillion to $54 trillion a year, with a likely figure of at least $33 trillion.
Most of this, they said, lies outside formal markets and is therefore not reflected in market prices, the customary gauge of economic value. Their estimate, they said, compares with $18 trillion for the gross national product of the world, which is all the goods and services produced by people each year. The researchers, who based their conclusion on other published studies and their own calculations, freely point out that their estimate is a rough approximation, a first step that is mainly intended to determine whether ecosystem services amount to "big potatoes or small potatoes," in the words of Dr. Robert Costanza, an ecological economist at the University of Maryland who headed the study. "We come away from this thinking this is a minimum estimate," Dr. Costanza said. Virtually everyone agrees that without the natural world, the human economy and indeed human life could not exist. In this sense, the value of nature is infinite, immeasurable. To some conservationists, this is all that needs to be said. "Common sense and what little we have left of the wisdom of our ancestors tells us that if we ruin the earth, we will suffer grievously," said Dr. David Ehrenfeld, a conservation biologist at Rutgers University. He said he accepted the results of the Costanza study, which he regards as conservative, but added: "I am afraid that I don't see much hope for a civilization so stupid that it demands a quantitative estimate of the value of its own umbilical cord." Dr. Ehrenfeld and some other conservationists believe that moral arguments for saving nature are more persuasive than economic ones. But in the view of Dr. Costanza and others, moral and economic arguments should be pursued in parallel. People make economic choices involving nature all the time, according to this view, but they do so without taking all the costs into account.
For example, the dollar value of a wetland's flood-protection and water cleansing abilities has not traditionally been considered when it is lost to a shopping center. The result is a creeping depletion of natural wealth. If such costs were reflected in day-to-day transactions, these theorists say, society would pay more attention to what is lost when land is "developed." "We can't wait until we've disrupted the planet's life-support system beyond repair," said Dr. Gretchen C. Daily, a conservation biologist at Stanford University. She is the editor of a recent collection of papers on the subject, "Nature's Services: Societal Dependence on Natural Ecosystems," published as a book by Island Press. Once gone, she noted, many of these ecological assets would be difficult, if not impossible, to replace; it can take thousands of years to recharge depleted aquifers or replenish topsoil. Until now, fledgling efforts at what is called "green accounting" have been pursued largely at the national level. In a widely applauded attempt of this kind, Dr. Robert Repetto, senior economist at the World Resources Institute, a Washington-based research organization, has analyzed the economies of Indonesia and Costa Rica. In Indonesia, he and colleagues calculated that losses from soil erosion reduced the net value of crops by about 40 percent and that the loss of value from deforestation was four times as high as the value of the timber extracted. They also concluded that depletion of Costa Rica's soils, forests and fisheries resulted in a 25 percent to 30 percent reduction in potential economic growth. A nascent effort to introduce a measure of natural-resource accounting into the United States' official calculation of economic worth was made in 1993, but it is on hold pending a Congressionally ordered study of the soundness of the approach by the National Academy of Sciences. A report is due this year.
The new Costanza study is not really an exercise in green accounting, and some experts question its practical usefulness while others express skepticism about its basic finding. "There's no way of knowing how good this number is," Dr. Repetto said of the study's estimate of $33 trillion for the global value of ecosystem services. "They've made some heroic assumptions. I suppose it's useful for rhetorical purposes." But the number, he said, is less important than the fundamental point made by the study "that ecosystem services are important; I don't think reasonable people would deny that." Other experts see more utility in the analysis. The study has succeeded in providing "a conservative estimate of what the environment does for us," said Dr. Stuart Pimm, an ecologist at the University of Tennessee who wrote a commentary on the Costanza study in Nature. "So often," he said, "people concerned with protecting the environment go up against these very highly detailed economic analyses and feel they don't have anything in kind with which to respond." In the tables of specific ecosystem services that accompany the study, he said, "what Costanza et al. has done is provide a checklist" that national and local policy makers can use in attempting to make a rough gauge of the economic worth of their natural assets. One table, for instance, lists specific ecosystem services, and their supposed value, for 11 biomes, or types of natural areas. These include the open ocean, estuaries, seagrass and algae beds, coral reefs, continental shelves, tropical forests, temperate forests, grasslands and rangelands, tidal marshes and mangroves, wetlands and flood plains and lakes and rivers. The next step, Dr. Costanza says, is to delineate more clearly the explicit linkages between particular local ecosystems and local economies. For example, how much of the value of the Louisiana shrimp catch is attributable to the wetlands in which the shrimp reproduce and grow?
But since wetlands perform other services as well, the wetlands' value as a shrimp nursery would be only a minimum indication of their overall value. The same applies, for example, to the Catskill watershed, which serves other economic functions besides providing and cleaning New York City's water -- attracting tourists, for instance. "Nobody thinks the Catskills are worth only $4 billion," Dr. Daily said, referring to the cost of replacing the Catskills' water-cleansing function. Assuming the value of ecosystem services could eventually be established, how might economic policies be changed? For openers, Dr. Daily and others say, government subsidies that distort the value of natural resources -- in fisheries and logging, for example -- should be abolished. Also, tax incentives might be given to landowners to protect the long-term assets represented by natural capital rather than using them for short-term gain. Some experts advocate applying traditional economic arrangements to ecosystem services. For instance, Dr. Graciela Chichilnisky and Dr. Geoffrey Heal, economists at Columbia University, have proposed selling investment shares in a given ecosystem. Using the Catskill watershed as an illustration, they say that the capital thus raised would pay for preserving the watershed. Returns to investors would come either from a share of the costs saved by not having to build a treatment plant or, if the investment were private, by actually selling ecosystem services. In the case of a watershed, clean water would be sold. But, says Dr. Daily, "the first thing is getting the prices right." GRAPHIC: Chart: "The Value of the Natural World" A new attempt by 13 scientists to assign dollar values to essential services performed for the human economy by the natural world divides the services into the following 17 categories.
Gas Regulation -- Carbon dioxide/oxygen balance, ozone for ultraviolet protection
Climate Regulation -- Greenhouse gas regulation
Disturbance Regulation -- Storm protection, flood control, drought recovery
Water Regulation -- Provision of water for irrigation, mills or transportation
Water Supply -- Provision of water by watersheds, reservoirs and aquifers
Erosion Control and Sediment Retention -- Prevention of soil loss by wind, runoff, etc.; storage of silt in lakes and wetlands
Soil Formation -- Weathering of rock and accumulation of organic material
Nutrient Cycling -- Nitrogen fixation
Waste Treatment -- Pollution control, detoxification
Pollination -- Pollinators for plant reproduction
Biological Control -- Predator control of prey species
Refuges -- Nurseries, habitat for migratory species
Food Production -- Production of fish, game, crops, nuts and fruits by hunting, fishing, gathering or subsistence farming
Raw Materials -- Production of lumber, fuel or fodder
Genetic Resources -- Medicines, resistance genes for crops, ornamental plant species, pets
Recreation -- Ecotourism, sports fishing, other outdoor recreation
Cultural -- Esthetic, artistic, educational, spiritual and/or scientific values of ecosystems
(Source: Nature) From checker at panix.com Thu Jul 7 14:49:54 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:49:54 -0400 (EDT) Subject: [Paleopsych] Newsday: Test Seeks to Measure Students' Web IQ Message-ID: Test Seeks to Measure Students' Web IQ http://www.newsday.com/technology/business/wire/sns-ap-internet-intelligence,0,7931562,print.story?coll=sns-ap-technology-headlines By MICHELLE LOCKE Associated Press Writer 5.7.2 LONG BEACH, Calif. -- Students apply to college online, e-mail their papers to their professors and, when they want to be cheeky, pass notes in class by text-messaging. But that doesn't necessarily mean they have a high Internet IQ. "They're real comfortable instant-messaging, downloading MP3 files.
They're less comfortable using technology in ways that require real critical thinking," says Teresa Egan of the Educational Testing Service. Or as Lorie Roth, assistant vice chancellor of academic programs at California State University, puts it: "Every single one that comes through the door thinks that if you just go to Google and get some hits -- you've got material for your research paper right there." That's why Cal State and a number of other colleges are working with ETS to create a test to evaluate Internet intelligence, measuring whether students can locate and verify reliable online information and whether they know how to properly use and credit the material. "This test measures a skill as important as having mathematics and English skills when you come to the university," says Roth. "If you don't come to the university with it, you need to know that you are lacking some skills that educated people are expected to have." A preliminary version of the new test, the Information and Communication Technology Literacy Assessment, was given to 3,300 Cal State students this spring to see how well it works, i.e., testing the test. Individual scores aren't being tallied but campuses will be getting aggregate reports. Next year, the test is expected to be available for students to take on a voluntary basis. Cal State is the lead institution in a consortium that includes UCLA, the University of Louisville, the California Community College System, the University of North Alabama, the University of Texas System and the University of Washington. Some of the institutions involved are considering using the test on incoming students to see if they need remedial classes, says Egan, ETS' project manager for the Information and Communication Technology Literacy Assessment. Other schools are thinking about giving the test as a follow-up to communications courses to gauge the effectiveness of curricula.
Robert Jimenez, a student at Cal State-Fullerton who took the prototype test this spring, gives it a passing grade. "It was pretty good in that it allowed us to go ahead and think through real-life problems." Sample questions include giving students a simulated page of Web search results on a particular subject and asking students to pick the legitimate sources. So, a question on bee sting remedies presents a choice of sites ranging from ads to a forum for herb treatments to (the correct answer) a listing from the National Institutes of Health, identifiable by having "nih" in the URL (site address) along with the ".gov" suffix that connotes an official government listing. High tech has been a fixture of higher ed for some years. A 2002 report from the Pew Internet & American Life Project found that 79 percent of college Internet users thought the Internet had a positive impact on their academic experience. More than 70 percent used the Internet more than the library and 56 percent said e-mail improved their relationships with professors. Of course, some of those text-messaging students are still being taught by professors whose idea of a personal data assistant is a fresh pad of Post-Its. "The problem with technology and education is how do you fit the new technology into existing curriculum lesson plans. You can't add more class time and it's much easier to just keep teaching the way you were," says Steve Jones, a co-author on the Pew study and a communications professor at the University of Illinois at Chicago. Jones folds lessons on Internet use into his classes. And he doesn't mince words about students who try the "click, copy and paste" approach to homework. "I tell the students, `Some of you are going to put off this paper until the night before. You're going to go to Google, type in search words and just look at the top five hits and use those. I'm going to grade you on this. 
I'm going to look at these sources and so let's talk about how to evaluate sources.'" Which doesn't necessarily mean they all "suddenly become fabulous information evaluators and seekers, but it gives them a little bit of an idea that this isn't something that's apart from learning." Jones also finds himself learning from students, who are trying out new things like blogs and collaborating with other students online to create new sources of information. He thinks assessing students' Internet skills could be useful in figuring out ways to help them do better research but cautions that it's tough to test on something as changeable as the Internet. Roth notes that the bulk of the assessment focuses on critical thinking skills, being able to analyze the legitimacy of Web sites, and knowing the difference between properly cited research and plagiarism, things that "haven't changed very much since I enrolled in college in 1969." For today's students, working on the Net means not having the safety net of references vetted by campus librarians. But Roth isn't nostalgic. "Anybody want to go back to the bad old days when you had manual typewriters, and you had to get up and walk to the library to look up something?" she says with a laugh. "I don't think so." On the Net: http://www.calstate.edu http://www.ets.org/ictliteracy/ From checker at panix.com Thu Jul 7 14:50:02 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:02 -0400 (EDT) Subject: [Paleopsych] WSJ: (Nobel Sperm Bank) Nick Schulz: Turbulence in the Gene Pool Message-ID: Nick Schulz: Turbulence in the Gene Pool WSJ, 5.7.5 http://online.wsj.com/article/0,,SB112052650043477006,00.html It was over breakfast one February morning in 2001 that Tom Legare, a precocious but otherwise typical American teenager, learned from his mother that his real father "was a Nobel Prize winner." The man married to his mother for so many years, it turned out, was in fact not biologically related to him.
Rather, another man was, presumably a "brilliant scientist" (name unknown) who had contributed to the Repository for Germinal Choice, a "genius" sperm bank founded by businessman Robert K. Graham in 1980. Thus Tom was one of a far-flung brood, since many other infertile couples -- like the Legares -- had availed themselves of this high-end gene pool. Tom's existence really begins with Graham, an entrepreneur who had made a fortune in the eyeglass business only to turn his attention in the late 1970s to what were, for him anyway, grander and more noble pursuits. He hoped to save the human race from what he deemed a "genetic catastrophe." An experiment in 'positive eugenics' -- using Nobelists to boost inherited IQ. As David Plotz tells it in "*The Genius Factory*" (Random House, 262 pages, $24.95), a wonderfully readable and eye-opening account, Graham feared that, in late-20th-century America, "cradle-to-grave social welfare programs paid incompetents and imbeciles to reproduce. As a result, 'retrograde humans' were swamping the intelligent minority." The only way to save mankind was for the best intellectual "specimens" of the species to reproduce at a higher rate. And the best way to make that happen was to have the planet's brightest -- Nobel Prize-winners -- donate their genetic material for the betterment of humanity. "Ten men of high intelligence," Graham mused, "can be more effective than 1,000 morons." He envisioned replacing Darwin's natural selection with "intelligent selection." But the plan met resistance. "It's pretty silly," Max Delbruck, a Nobel winner in medicine said at the time. Nobelist Linus Pauling joked that "the old-fashioned way is still best." Such caviling was a problem. Graham's sperm bank had no hope of being marketed "to a skeptical public," Mr. Plotz writes, unless he scored some blue-ribbon sperm. That's when William Shockley stepped up to the cup, er, plate. 
Shockley had won a Nobel for his work on the transistor and helped launch the Silicon Valley tech boom. He also shared Graham's pessimistic view of mankind's genetic destiny, arguing that "humanitarianism gone berserk" -- by which he meant mostly welfare -- was keeping undesirables alive and, worse, procreating. Thus he was eager to do his part to skew the gene pool toward intelligence. With Shockley's seed, Graham was ready to go, and on Feb. 29, 1980, the Nobel Prize sperm bank was introduced to the world. According to Mr. Plotz, the bank was a product of a specific time, place and culture. California in the 1970s was the home of America's freedom-loving political and technological vanguard, filled with a spirit of limitless possibility. It was embodied in everyone from Ronald Reagan to the academics at Caltech and Stanford and the digital pioneers of Silicon Valley. An extreme, messianic form of the era's spirit took root in men like Graham and Shockley. In 2001, Mr. Plotz set out to find out what happened to the fruit of this odd, two-decade experiment in "positive" eugenics. "The Nobel sperm bank kids, I realized, were messengers from our future." Over the years, other journalists had tried to learn more about the donors and children of the bank, but unsuccessfully: The bank was set up to ensure that the donors didn't know who their kids were and that the kids didn't know who their fathers were. So Mr. Plotz, a writer and editor with the online magazine Slate, figured that he would use the Internet to break through this secret screen. He published an article in Slate asking anyone who had ever been involved with the bank or who knew anything about its donors or children to contact him. And the experiment worked. Donors and mothers reached out by email. Over three years he was able to track down dozens of children of the bank deposits. In several tense, hilarious and touching episodes, Mr. 
Plotz describes how he midwifed the meeting of some children and their donor fathers. Regardless of the ethical merits of Graham's plan -- or, one should say, its demerits -- the bank was doomed to failure. Most Nobelists are older, relative to the rest of the population. Their sperm is generally of substandard quality -- lower in number and, so to speak, reproductive energy -- and thus less likely to fertilize an egg. So shortly after opening the bank, Graham had to seek out as donors younger, high-IQ folks who weren't Nobelists. But once Graham watered down the requirements, the reason for the bank's existence -- its cachet -- was gone. Standards slipped. As Tom Legare was later to find out, his father didn't win a Nobel. No genius, he was something of a lovable loser living next to a drug den in Florida. As for the other "genius" children: Many were moderately bright, but only one exceptionally so. Many had psychological problems, although it is impossible to say whether the "genius" part of their makeup played a part. The story of this genetic experiment is a rare contribution to the debate over biotechnology, which usually ping-pongs between dystopians and techno-enthusiasts making broad, philosophical claims. By giving readers the case study of a serious -- and failed -- effort to engineer a better human race, Mr. Plotz brings the discussion back down to earth, where it belongs. Nick Schulz is editor of TechCentralStation.com. From checker at panix.com Thu Jul 7 14:50:11 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:11 -0400 (EDT) Subject: [Paleopsych] Pravda: Left-handed human race to make the world a better place Message-ID: Left-handed human race to make the world a better place http://english.pravda.ru/printed.html?news_id=15765 5.7.6 [Now be careful evaluating the merits of this report.
My first reaction is that Pravda is not reliable since it takes ESP for granted, while in this country and the world at large it is a matter of great controversy. What kind of quality control does Pravda now make over its writers? I recall the case of Bill White, who called himself a "libertarian socialist" and wrote articles quite critical of Jewish influence on American foreign policy (fine with me to get points of view not widely circulated in the United States) but veered off into implausible conspiracies (as opposed to plausible ones, of course). [My second reaction is that the article does make several interesting points. I'd love to think left-handers have higher IQs, since I'm left-handed. [My third reaction is that I do not recall this claim and probably would have remembered it if I had. So I'm inclined to view all the facts with suspicion, except that Leonardo, Tolstoy, Chaplin, and Kidman are lefties.] Scientists say that the number of left-handed individuals is growing rather fast in the world today. Specialists calculate that every tenth human being is left-handed; the total number of left-handers in the world exceeds 600 million. According to experts' estimates, there will be a billion left-handed people living on planet Earth by 2020. The world will be different against the background of such a trend, scientists say. 'The number of left-handed babies born in 2005 was double the number of left-handed children born in 1990,' doctor of biological sciences Alexander Dubov said. 'Mankind is changing slowly. However, it is not about degradation of the human civilization at all. Quite the contrary: people are becoming more perfect,' the professor said. Recent research conducted in many countries shows that the IQ level of left-handed people is higher than that of right-handed individuals. Every fifth outstanding person is left-handed as a rule.
Furthermore, many people with extraordinary abilities are left-handed too. 'There are a lot of extrasensorial individuals among them,' doctor of medical sciences Alexander Lee said. 'We checked the supposition. There are hardly any right-handers among those who have the gift of remote viewing, telepathy, or X-ray viewing,' the doctor said. Right- and left-handers are virtually different types of people with their own special mindsets and perception of the world. 'They get along with each other perfectly, but there is a hidden evolutionary struggle taking place between them, which recalls the struggle between primeval humans, Cro-Magnon and Neanderthal men. It seems to me that left-handers will eventually win the fight owing to their anomalous abilities,' scientist of anomalous phenomena Pyotr Chereda said. Modern scientists have already concluded that the left-handed human race will change the world; humanity will become more intellectual and extrasensorial. It is noteworthy that specialists do not have an explicit explanation for the mystery of the left-handed phenomenon. They discovered, however, that a human being takes either a left or a right way of development in the mother's womb. Scientists photographed growing fetuses with the help of a special ultrasonic camera. If an unborn baby tended to put its left hand in its mouth, it would be born a left-handed infant. Specialists concluded that something happens in the fetus's brain during the third or fourth month of its development. The right hemisphere takes advantage of the left one and claims responsibility for those parts of the brain which will later be in charge of speaking and writing abilities. The brain undergoes a significant change when the left hand eventually plays the dominating role. If the transformation was not that strong, or if it was incomplete, it is possible for a person to take a step back.
In this case, a person could be described as a right-left-handed individual, i.e. he or she would possess the qualities of both right- and left-handers. It brings up the idea that something similar happened to Russian President Putin. Mr. Putin seems to be a right-handed person. However, he wears his watch on the right wrist. In addition, Putin often uses his left hand to take notes out of his right pocket. The left-handed phenomenon may be explained by genetic peculiarities. The ability is often handed down across generations. There are certain bizarre peculiarities, though, which can hardly be explained by inheritance. A recent study conducted among 20,000 people showed that left-handed people are usually delivered by women over 30 years of age. Furthermore, left-handers are usually born prematurely, during the second half of the year. Left-handed people have remarkable abilities to perceive sounds and intonations absolutely clearly and to distinguish superfine color shades. They have picturesque memory, which preserves bright impressions for quite long periods of time. Most outstanding left-handers: using only his left hand, Leonardo Da Vinci painted 'Mona Lisa'; Leo Tolstoy wrote 'War and Peace'; Charlie Chaplin played with his stick. Nicole Kidman combs her hair holding a hairbrush in her left hand.
From checker at panix.com Thu Jul 7 14:50:21 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:21 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: Finding Design in Nature Message-ID: Finding Design in Nature New York Times Op-Ed, 5.7.7 http://www.nytimes.com/2005/07/07/opinion/07schonborn.html By CHRISTOPH SCHÖNBORN Vienna EVER since 1996, when Pope John Paul II said that evolution (a term he did not define) was "more than just a hypothesis," defenders of neo-Darwinian dogma have often invoked the supposed acceptance - or at least acquiescence - of the Roman Catholic Church when they defend their theory as somehow compatible with Christian faith. But this is not true. The Catholic Church, while leaving to science many details about the history of life on earth, proclaims that by the light of reason the human intellect can readily and clearly discern purpose and design in the natural world, including the world of living things. Evolution in the sense of common ancestry might be true, but evolution in the neo-Darwinian sense - an unguided, unplanned process of random variation and natural selection - is not. Any system of thought that denies or seeks to explain away the overwhelming evidence for design in biology is ideology, not science. Consider the real teaching of our beloved John Paul. While his rather vague and unimportant 1996 letter about evolution is always and everywhere cited, we see no one discussing these comments from a 1985 general audience that represents his robust teaching on nature: "All the observations concerning the development of life lead to a similar conclusion. The evolution of living beings, of which science seeks to determine the stages and to discern the mechanism, presents an internal finality which arouses admiration. This finality which directs beings in a direction for which they are not responsible or in charge, obliges one to suppose a Mind which is its inventor, its creator."
He went on: "To all these indications of the existence of God the Creator, some oppose the power of chance or of the proper mechanisms of matter. To speak of chance for a universe which presents such a complex organization in its elements and such marvelous finality in its life would be equivalent to giving up the search for an explanation of the world as it appears to us. In fact, this would be equivalent to admitting effects without a cause. It would be to abdicate human intelligence, which would thus refuse to think and to seek a solution for its problems." Note that in this quotation the word "finality" is a philosophical term synonymous with final cause, purpose or design. In comments at another general audience a year later, John Paul concludes, "It is clear that the truth of faith about creation is radically opposed to the theories of materialistic philosophy. These view the cosmos as the result of an evolution of matter reducible to pure chance and necessity." Naturally, the authoritative Catechism of the Catholic Church agrees: "Human intelligence is surely already capable of finding a response to the question of origins. The existence of God the Creator can be known with certainty through his works, by the light of human reason." It adds: "We believe that God created the world according to his wisdom. It is not the product of any necessity whatever, nor of blind fate or chance." In an unfortunate new twist on this old controversy, neo-Darwinists recently have sought to portray our new pope, Benedict XVI, as a satisfied evolutionist. They have quoted a sentence about common ancestry from a 2004 document of the International Theological Commission, pointed out that Benedict was at the time head of the commission, and concluded that the Catholic Church has no problem with the notion of "evolution" as used by mainstream biologists - that is, synonymous with neo-Darwinism. 
The commission's document, however, reaffirms the perennial teaching of the Catholic Church about the reality of design in nature. Commenting on the widespread abuse of John Paul's 1996 letter on evolution, the commission cautions that "the letter cannot be read as a blanket approbation of all theories of evolution, including those of a neo-Darwinian provenance which explicitly deny to divine providence any truly causal role in the development of life in the universe." Furthermore, according to the commission, "An unguided evolutionary process - one that falls outside the bounds of divine providence - simply cannot exist." Indeed, in the homily at his installation just a few weeks ago, Benedict proclaimed: "We are not some casual and meaningless product of evolution. Each of us is the result of a thought of God. Each of us is willed, each of us is loved, each of us is necessary." Throughout history the church has defended the truths of faith given by Jesus Christ. But in the modern era, the Catholic Church is in the odd position of standing in firm defense of reason as well. In the 19th century, the First Vatican Council taught a world newly enthralled by the "death of God" that by the use of reason alone mankind could come to know the reality of the Uncaused Cause, the First Mover, the God of the philosophers. Now at the beginning of the 21st century, faced with scientific claims like neo-Darwinism and the multiverse hypothesis in cosmology invented to avoid the overwhelming evidence for purpose and design found in modern science, the Catholic Church will again defend human reason by proclaiming that the immanent design evident in nature is real. Scientific theories that try to explain away the appearance of design as the result of "chance and necessity" are not scientific at all, but, as John Paul put it, an abdication of human intelligence. 
Christoph Schönborn, the Roman Catholic cardinal archbishop of Vienna, was the lead editor of the official 1992 Catechism of the Catholic Church. From checker at panix.com Thu Jul 7 14:50:31 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:31 -0400 (EDT) Subject: [Paleopsych] NYT: Films Take a More Sophisticated Look at Teenage Sex Message-ID: Films Take a More Sophisticated Look at Teenage Sex New York Times, 5.7.6 http://www.nytimes.com/2005/07/06/movies/06sex.html By [3]CARYN JAMES In Miranda July's shrewdly observed [4]"Me and You and Everyone We Know," a 14-year-old boy and his 7-year-old brother sit in front of a computer screen engaging in an increasingly common form of sex education: an online chat with an anonymous woman. Although the 14-year-old is savvy enough to guess that they could be talking, say, to a grossly overweight man instead of some hot babe, his little brother suggests, "Ask if she likes baloney," then innocently offers a nonsensical, physically impossible act that reminds us he is not so far from his potty training days. The online response is, "You are crazy, and you are making me very hot." Precocious sexual knowledge - far beyond what children and teenage characters can absorb, and often with devastating consequences - has become a staple of current independent films. In the French film [5]"Lila Says," the title character is a 16-year-old whose wealth of sexual knowledge and free-wheeling behavior leads the town to think she may actually be a whore. In [6]Gregg Araki's unexpectedly eloquent [7]"Mysterious Skin," two boys who are molested by their Little League coach grow up to be a teenage hustler and a guy who believes he was abducted by aliens. (All three films are playing in New York and a handful of other cities, and will expand to more cities through the next month or so.)
And movies with similar themes will arrive in the next month, including [8]Don Roos's comic romance [9]"Happy Endings" and the satiric [10]"Pretty Persuasion." But while these filmmakers are highly aware of the dangers such early knowledge can pose - from Internet predators to unwanted pregnancy - their films do not display the knee-jerk judgments you might expect, and that shaped the 2003 film [11]"Thirteen." Where "Thirteen" was praised for its audacity in depicting 13-year-olds having sex and doing drugs, it was really a traditional cautionary tale. The current films are more complex. They often blame the big, wide media world and other social influences that cause children to grow up too fast. But they also accept this early loss of innocence as the new way of the world and move on from there. In their bewildered acceptance of this new reality, the films reflect the current social moment, in all its fraught confusion, more astutely than any alarmist work could. It's not surprising that all these films are smaller, independent works; they can afford to address the riskier themes that big-budget movies avoid. The current films also share sophisticated narratives and graceful styles that make their unsettling themes palatable. "Me and You" is directly about the romance of a lonely artist (played by Ms. July) and a recently separated shoe salesman (John Hawkes) whom she willy-nilly decides is her soul mate. But the difficulty of forging a connection is reflected in the next generation - the young brothers are the salesman's sons - in which children and teenagers confront a disorienting sexual world. The family's neighbors include two slightly older teenage girls who use the 14-year-old brother as a practice object for oral sex, asking him to judge which of them does it better. When the father's seemingly respectable co-worker makes lewd suggestions to the girls, they teasingly kiss in front of him. 
But when they finally dare to ring his doorbell, he cowers and hides. Ms. July's sense of a dangerous world encroaching is deflected by such last-minute twists. Yet it is still chilling when the 7-year-old arranges a real-life date with the mysterious online lover. And it may be even more chilling that the father is not callous or neglectful, just hapless and not very smart, an ordinary guy. Ms. July acknowledges the risks of precocious, half-baked sexual knowledge, but in keeping with the endearing tone of her film, willfully evades those dangers. There is no such evasion in "Lila Says." Set in a poor Marseille neighborhood, Ziad Doueiri's gripping film seems headed for tragedy from the start. The beautiful blond Lila is not just another sexually active 16-year-old, no longer a rarity. She is so bluntly, openly sexual that she enters the film by offering to expose herself to a stranger, Chimo, the 19-year-old who falls in love with her yet is intimidated by her experience. Chimo's friends regard Lila as a slut; yet if the brutal finale they set in motion is all too predictable, one crucial element is not. Chimo discovers Lila's scrapbook, in which she has pasted magazine articles about subjects like amateur porn on the Internet, clippings that suggest how much of her sexual knowledge was shaped by a world she was not ready to understand. "Lila Says" is so delicately balanced that it manages to have things both ways. It is erotic, notably in a sexual encounter between Lila and Chimo on a motorbike, yet also conveys a sad sense of lost innocence. Despite the film's melodrama, that balance creates a hauntingly realistic aura. Its least convincing element comes when Lila's aunt and guardian makes a pleading sexual advance toward her. The theme is never picked up again, so the abuse seems like a forced, convenient explanation for Lila's behavior. The idea of childhood abuse is used more intelligently in "Mysterious Skin." 
The molestation scenes are not graphic, but they are so clear and depicted with such immediacy that at first it seems the film has crossed a line into a completely nonjudgmental realm. But "Mysterious Skin" adheres to the boys' points of view so rigorously that the abuse reflects their own confusion, just as the film's lyricism suggests their emotional escape strategies. By the end, when the anguish inflicted on the boys becomes apparent, we see that the film realizes the abuse was monstrous. Mr. Araki has always been a provocative filmmaker, not an ingratiating one, and while "Mysterious Skin" is lucid about the horrible violation of the boys, it refuses to preach at us, and much of its power comes from that unflinching approach. While all these films matter-of-factly assume that sexual knowledge arrives earlier and earlier, "Happy Endings" is essentially a cheerful movie, even though its plot is set off when a teenage stepbrother and stepsister have sex that results in a pregnancy. "Pretty Persuasion" is caustic, as several 15-year-old girls maliciously and falsely accuse a teacher of abuse, setting off a media circus. That such varied tones can be spun from a common idea says that precocious sexuality is considered a pervasive part of our world, even if the filmmakers have no better idea of what to do with that knowledge than the 7-year-old knows what to do on his date. References 3. http://query.nytimes.com/search/query?ppds=bylL&v1=CARYN%20JAMES&fdq=19960101&td=sysdate&sort=newest&ac=CARYN%20JAMES&inline=nyt-per 4. http://movies2.nytimes.com/gst/movies/movie.html?v_id=312502&inline=nyt_ttl 5. http://movies2.nytimes.com/gst/movies/movie.html?v_id=315403&inline=nyt_ttl 6. http://movies2.nytimes.com/gst/movies/filmography.html?p_id=79831&inline=nyt-per 7. http://movies2.nytimes.com/gst/movies/movie.html?v_id=291995&inline=nyt_ttl 8. http://movies2.nytimes.com/gst/movies/filmography.html?p_id=167026&inline=nyt-per 9. 
http://movies2.nytimes.com/gst/movies/titlelist.html?v_idlist=126064;126065;295227&inline=nyt_ttl 10. http://movies2.nytimes.com/gst/movies/movie.html?v_id=312864&inline=nyt_ttl 11. http://movies2.nytimes.com/gst/movies/titlelist.html?v_idlist=278975;160643&inline=nyt_ttl From anonymous_animus at yahoo.com Thu Jul 7 20:55:55 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 7 Jul 2005 13:55:55 -0700 (PDT) Subject: [Paleopsych] genes made me do it In-Reply-To: <200507071800.j67I0MR06716@tick.javien.com> Message-ID: <20050707205555.97216.qmail@web30808.mail.mud.yahoo.com> >>Consider: you are no longer responsible for anything. Sound familiar? Once it was the devil. Now it is the gene that made you do it. You are officially off the hook. It isn't your fault at all. It's your faulty genes. It gets even better. Not only is it not your fault, but you actually are a victim, a victim of your own toxic gene pool.<< --We ought to distinguish between the scientific question, "What causes human behavior" from the political question, "How do we encourage people to control behavior that might harm society". Confusing the two questions is a bad idea. It's entirely possible that some people are genetically driven to violence. But that would leave us where we already are: with a group of people who can't or won't control their behavior. We may say "You must control yourself" but we have no faith that the command will be enough. So we confine criminals instead -- Exactly what we would do if it were proven their genes made them do it. The only real difference would be that we'd no longer view "deserving it" as reason to heap scorn on those we've incarcerated. The most violent criminals were almost uniformly treated with extreme abuse in their formative years, and we already KNOW that shaming them only produces more violence rather than less. 
Keeping people who can't (or won't -- it makes no practical difference) control themselves away from situations where they could harm others is still the only reliable method of prevention. However, identifying people at risk for violence, whether it's a genetic trait or a result of early abuse and role modeling, is a good idea. Pre-emptive incarceration would not be an acceptable strategy, but providing counselling and cognitive therapy might counteract any existing tendency toward violence. Cognitive therapy can identify subliminal thoughts that accelerate violence (demonization of others, shifting blame, shame spiraling into rage, etc) and increase the individual's ability to calm himself and counteract the hypnotic trance-like triggers that would otherwise lead to reactive violence. It may also be helpful to view groups which demonize one another as victims of bad programming, and introduce counter-programs enabling each side to see members of the other as human rather than as symbols of evil. Regardless of whether free will exists or not, it's a good thing to be able to respond in the early stages, before violence breaks out, rather than merely punishing people after the fact. Perhaps the fear of society is not that people can't control themselves, but that by demonizing criminals we are accelerating their pathology. What if we're making things worse, by focusing on who deserves what kind of punishment, rather than how to interrupt patterns of violence before they become lethal? Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com From anonymous_animus at yahoo.com Thu Jul 7 21:00:10 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 7 Jul 2005 14:00:10 -0700 (PDT) Subject: [Paleopsych] drugs In-Reply-To: <200507071800.j67I0MR06716@tick.javien.com> Message-ID: <20050707210011.75350.qmail@web30802.mail.mud.yahoo.com> >>In fact, I can honestly say that without cannabis, most of my scientific research would never have been done and most of my books on psychology and evolution would not have been written.<< --That's not surprising. Marijuana helps bridge brain hemispheres, enabling the kind of cross-contextual thinking that produces new ideas. This shouldn't be "politically incorrect". If a scientist, writer or artist says a glass of wine or whiskey helps him ponder a problem, few people will find it outrageous or even question the validity of the claim. But political correctness surrounding legal/illegal drugs makes it difficult for people to speak openly about their positive experiences with drugs other than alcohol or caffeine. Michael ____________________________________________________ Sell on Yahoo! Auctions - no fees. Bid on great items. http://auctions.yahoo.com/ From checker at panix.com Thu Jul 7 22:19:34 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 18:19:34 -0400 (EDT) Subject: [Paleopsych] NYT: Who Stole Sleep? The Pillow as Perp Message-ID: Who Stole Sleep? The Pillow as Perp New York Times, 5.7.7 http://www.nytimes.com/2005/07/07/fashion/thursdaystyles/07online.html By MICHELLE SLATALLA SHORTLY before midnight the other night, someone leaving the movie theater a block and a half from my house dropped car keys on the pavement. Actually, I don't know if the jingling came from keys or for that matter if there even was jingling, because like most mortals I slept through the incident. Even my dog Sticky, who has ears big enough to interest NASA, continued to snore.
But not my husband. "What was that?" he shrieked, sitting upright as if bitten by a snake. The princess who could not sleep on a pea had nothing on my husband. His rest is often disturbed by distant barking, other people's air-conditioners and "something that sounds like a mosquito, only it never stings me." For years I snoozed through the drama. But recently neck and back twinges have begun to surge through my husband like electrical currents and have prompted him to jump out of bed, switch on the light and hop around. Even I can't sleep through that. I'm not the first spouse to stumble blearily toward the computer at 2 a.m. in search of a solution. But in the lonely predawn hours, as I considered the Internet's various suggestions - from sleep masks like the foldable Dreamlite Relaxation model ($6.95 at Dreamessentials.com) to white noise machines like the Marsona Sleep Mate 980 ($52.95 at Naturestapestry.com) - one possibility intrigued me above all others. Maybe we needed better pillows. Ours, old and flat and musty, provided about as much neck support as a saltine. Could the pillows be sabotaging my husband's sleep? "A pillow is important if a person has poor sleep to begin with," said Dr. Clete A. Kushida, director of the Stanford University Center for Human Sleep Research. "The environment is important." But which pillow? There are no official standards for pillows and no research to prove that one type is better than another. "The studies haven't been done," Dr. Kushida said. "Basically it comes down to what is the most comfortable." Depending on who you are, that might mean llbean.com's goose down damask pillow (in sizes from standard to king and in fills ranging from soft to firm, $49 to $99). Or Overstock.com's Circle of Down pillow ($29.99). Or Livingincomfort.com's hypoallergenic pillow ($17.88).
Or maybe the answer is a synthetic pillow from Bedbathandbeyond.com (from $7.99 for the Jumbo Gusset to $79.77 for the standard-size Indulgence Supreme Thermo-Sensitive). I needed guidance. "How can I tell if my husband is a Thermo-Sensitive type or a Circle of Down man?" I asked Dr. James Maas, a professor and sleep researcher at Cornell University. "Given all the options, I wonder if anyone is sleeping on the right pillow." Professor Maas said that pillow issues affect a great number of Americans. "Somewhere near 50 percent of the country is sleep deprived," he said during a phone interview. "This country is a country of walking zombies, mostly due to sleep length but also to poor quality of sleep." Professor Maas recommended that my husband cut back on his caffeine intake and that we create a bedroom that was cool, dark and comfortable (which I figured was a nice way of saying our 95-pound dog should get off the bed). As for pillows, the difference comes down to down versus synthetic fill, Professor Maas said. A good pillow of either type should last up to 10 years, he said. You can test your pillow to find out if it's past its prime. "You take your pillow," he said. "Fold it in half. If it doesn't spring forward and open instantly by itself, you've got a dead pillow. Replace it." Professor Maas said he liked the quality of pillows manufactured by United Feather and Down, an Illinois company whose products, both down and synthetic, sell under various private labels. For instance United Feather and Down's Insuloft down and PrimaLoft synthetic-fill pillows are for sale online at thecompanystore.com, Landsend.com, Potterybarn.com and llbean.com. "We do a wide variety of fills," said Becky McMorrow, United Feather and Down's marketing manager. "Every retail customer tweaks the pillow to have an exclusive style. Restoration Hardware and Williams-Sonoma use the same fill but not the same fabric cover. Pottery Barn used a damask stripe."
No matter where you shop, expect to pay from $29 to $59 for a good synthetic pillow and from $59 to $129 for a goose down pillow with a minimum of 550 fill power, Ms. McMorrow said. "Fill power is a measurement of how lofty an ounce of down is and how high it comes up on a beaker after it's compressed," Ms. McMorrow said. After ascertaining a few facts about my husband - mostly sleeps on his side, switches back and forth between a flat pillow and a fluffier one as the night progresses - Ms. McMorrow mailed me four models to test. I did not feel it necessary to mention the experiment to him; he has enough on his mind. The first night, I discreetly slipped a PrimaLoft synthetic-fill Side Sleeper into his pillowcase. Gusseted to provide an even sleep surface and neck support for a side sleeper, it was similar to a $39 model from Bedbathandbeyond.com. Then I turned out the light and lay poised to take notes as he fell into an immediate deep sleep. Thirty minutes passed without a peep out of him. Then 60. Then I fell asleep. The next night I introduced the fluffier Insuloft down-filled Side Sleeper (very like a $99 version at Realgoods.com). After 30 minutes he sat up and asked suspiciously, "Do I hear a raccoon?" From this I deduced that while both Side Sleepers provided neck support, he preferred the denser texture of synthetic fill. The third night he also slept well on a down-synthetic blend called the Lyocell (similar to a pillow sold at thecompanystore.com for $89). By the fourth night I was the one who had earned the right to sleep on the Face Saver with "aloe-soft fabric" to prevent wrinkles. The pillow is to go on sale this fall on the Home Shopping Network for about $35. The conclusion? I bought all the pillows, because all four were an improvement over our old ones. The dog thought so, too.
E-mail: slatalla at nytimes.com From Euterpel66 at aol.com Fri Jul 8 04:33:59 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Fri, 8 Jul 2005 00:33:59 EDT Subject: [Paleopsych] genes made me do it Message-ID: <13d.16b6a455.2fff5c37@aol.com> In a message dated 7/7/2005 4:59:17 P.M. Eastern Daylight Time, anonymous_animus at yahoo.com writes: >>Consider: you are no longer responsible for anything. Sound familiar? Once it was the devil. Now it is the gene that made you do it. You are officially off the hook. It isn't your fault at all. It's your faulty genes. It gets even better. Not only is it not your fault, but you actually are a victim, a victim of your own toxic gene pool.<< --We ought to distinguish between the scientific question, "What causes human behavior" from the political question, "How do we encourage people to control behavior that might harm society". Confusing the two questions is a bad idea. It's entirely possible that some people are genetically driven to violence. But that would leave us where we already are: with a group of people who can't or won't control their behavior. We may say "You must control yourself" but we have no faith that the command will be enough. So we confine criminals instead -- Exactly what we would do if it were proven their genes made them do it. The only real difference would be that we'd no longer view "deserving it" as reason to heap scorn on those we've incarcerated. The most violent criminals were almost uniformly treated with extreme abuse in their formative years, and we already KNOW that shaming them only produces more violence rather than less. Keeping people who can't (or won't -- it makes no practical difference) control themselves away from situations where they could harm others is still the only reliable method of prevention. However, identifying people at risk for violence, whether it's a genetic trait or a result of early abuse and role modeling, is a good idea.
Pre-emptive incarceration would not be an acceptable strategy, but providing counselling and cognitive therapy might counteract any existing tendency toward violence. Cognitive therapy can identify subliminal thoughts that accelerate violence (demonization of others, shifting blame, shame spiraling into rage, etc) and increase the individual's ability to calm himself and counteract the hypnotic trance-like triggers that would otherwise lead to reactive violence. It may also be helpful to view groups which demonize one another as victims of bad programming, and introduce counter-programs enabling each side to see members of the other as human rather than as symbols of evil. Regardless of whether free will exists or not, it's a good thing to be able to respond in the early stages, before violence breaks out, rather than merely punishing people after the fact. Perhaps the fear of society is not that people can't control themselves, but that by demonizing criminals we are accelerating their pathology. What if we're making things worse, by focusing on who deserves what kind of punishment, rather than how to interrupt patterns of violence before they become lethal? Michael Not only would society be able to identify undesirable behavioral tendencies, but individuals themselves would be able to reflect on why they act the way they do. It would have a name. Unknown and uncertain are two of the most fearful adjectives describing states of mind known to our species. For her entire life my daughter knew that her behavior was self-destructive to sociality. Last year she found a name for her condition, Asperger's Syndrome. Since then, she's stopped kicking herself for her poor social skills and instead is taking medication that has worked wonders. She recognizes the reasons for her difficulties and tries to work on the skills that are necessary to social creatures. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it.
---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From Euterpel66 at aol.com Fri Jul 8 04:37:22 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Fri, 8 Jul 2005 00:37:22 EDT Subject: [Paleopsych] Pravda: Left-handed human race to make the world a better pl... Message-ID: <19a.37584d48.2fff5d02@aol.com> In a message dated 7/7/2005 10:50:56 A.M. Eastern Daylight Time, checker at panix.com writes: Scientists say that the number of left-handed individuals grows rather fast in the world today Last semester I had a class of 25 individuals and 8 of them were left-handed. Left-handed is just something I notice because my daughter is left-handed. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From anonymous_animus at yahoo.com Fri Jul 8 19:50:46 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Fri, 8 Jul 2005 12:50:46 -0700 (PDT) Subject: [Paleopsych] violence In-Reply-To: <200507081800.j68I0GR13416@tick.javien.com> Message-ID: <20050708195046.86225.qmail@web30808.mail.mud.yahoo.com> Lorraine says: >>Not only would society be able to identify undesirable behavioral tendencies, but individuals themselves would be able to reflect on why they act the way they do. It would have a name.<< --Good point. A Native American storyteller-therapist I once met described a man who would go into a trance when he fought with his wife. His leg would shake rhythmically and he would say to himself "It's never gonna change... it's never gonna change". He described how the man learned to notice the trance as it began and interrupt it. 
Others have reported positive results with mindfulness meditation, which may increase the ability of the prefrontal cortex to recognize automatic distortions of thinking and interrupt them. To me, that sounds a lot more useful than labeling people "evil". "I'm evil, I'll never change" is the last thought you want going through someone's mind if they have a problem with anger. And if anger stems from shame, thinking "I deserve to be punished" isn't going to have much positive effect either, it will only reinforce the shame and the rage it triggers. The one belief that would have a positive effect, "I can interrupt this cycle and change the outcome", is often lost in an avalanche of contempt and blame. >>For her entire life my daughter knew that her behavior was self-destructive to sociality. Last year she found a name for her condition, Asperger's Syndrome. Since then, she's stopped kicking herself for her poor social skills and instead is taking medication that has worked wonders.<< --I had similar experiences in my teens and 20's, kicking myself for not being able to express myself socially. Later, I learned to think of it as a feedback disorder and was better able to let anxiety exist without taking it as a sign of inevitable failure. I discovered I could communicate in text much better than in speech, because some of the timing and feedback issues are absent (mistakes in text can be backspaced, thoughts can come in floods without overloading speech, no awkward pauses, etc). I could process much better visually than orally. Before internet, the only thing that worked was LSD, and only for a day or two after taking a dose. For some reason, it enabled me to be fluid and trusting of unconscious processes, rather than focusing on every detail and being overloaded with anxiety and mechanical-feeling perfectionism. It was like the difference between crawling and flying, but not something I could do often. Do people with Asperger's communicate better in text as well? 
What medication helped your daughter? Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From checker at panix.com Fri Jul 8 22:19:10 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 18:19:10 -0400 (EDT) Subject: [Paleopsych] genes made me do it In-Reply-To: <20050707205555.97216.qmail@web30808.mail.mud.yahoo.com> References: <20050707205555.97216.qmail@web30808.mail.mud.yahoo.com> Message-ID: Michael, I think the criminal justice system has always done what you propose, though not formally, which says free will is either-or. In practice, sentences are handed down according to the degree of responsibility. It's ironic that those who have the least self-control are given the harshest punishments. On 2005-07-07, Michael Christopher opined [message unchanged below]: > Date: Thu, 7 Jul 2005 13:55:55 -0700 (PDT) > From: Michael Christopher > Reply-To: The new improved paleopsych list > To: paleopsych at paleopsych.org > Subject: [Paleopsych] genes made me do it > > >>> Consider: you are no longer responsible for > anything. Sound familiar? Once it was the devil. Now > it is the gene that made you do it. You are officially > off the hook. It isn't your fault at all. It's your > faulty genes. It gets even better. Not only is it not > your fault, but you actually are a victim, a victim of > your own toxic gene pool.<< > > --We ought to distinguish between the scientific > question, "What causes human behavior" from the > political question, "How do we encourage people to > control behavior that might harm society". Confusing > the two questions is a bad idea. > > It's entirely possible that some people are > genetically driven to violence. But that would leave > us where we already are: with a group of people who > can't or won't control their behavior. 
We may say "You > must control yourself" but we have no faith that the > command will be enough. So we confine criminals > instead -- Exactly what we would do if it were proven > their genes made them do it. The only real difference > would be that we'd no longer view "deserving it" as > reason to heap scorn on those we've incarcerated. The > most violent criminals were almost uniformly treated > with extreme abuse in their formative years, and we > already KNOW that shaming them only produces more > violence rather than less. Keeping people who can't > (or won't -- it makes no practical difference) control > themselves away from situations where they could harm > others is still the only reliable method of > prevention. > > However, identifying people at risk for violence, > whether it's a genetic trait or a result of early > abuse and role modeling, is a good idea. Pre-emptive > incarceration would not be an acceptable strategy, but > providing counselling and cognitive therapy might > counteract any existing tendency toward violence. > Cognitive therapy can identify subliminal thoughts > that accelerate violence (demonization of others, > shifting blame, shame spiraling into rage, etc) and > increase the individual's ability to calm himself and > counteract the hypnotic trance-like triggers that > would otherwise lead to reactive violence. It may also > be helpful to view groups which demonize one another > as victims of bad programming, and introduce > counter-programs enabling each side to see members of > the other as human rather than as symbols of evil. > Regardless of whether free will exists or not, it's a > good thing to be able to respond in the early stages, > before violence breaks out, rather than merely > punishing people after the fact. > > Perhaps the fear of society is not that people can't > control themselves, but that by demonizing criminals > we are accelerating their pathology. 
What if we're > making things worse, by focusing on who deserves what > kind of punishment, rather than how to interrupt > patterns of violence before they become lethal? > > Michael From checker at panix.com Sat Jul 9 00:05:00 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:00 -0400 (EDT) Subject: [Paleopsych] Gary North: Terrorism and Insurgency Message-ID: Gary North: Terrorism and Insurgency Gary North's REALITY CHECK Issue 462, 5.7.8 I had been planning to write on this topic before the terrorist bombings in London. The bombings have forced me to speed up my publishing timetable. We must distinguish carefully between insurgency and terrorism. There are overlaps in the two movements, but they are conceptually distinct. They are also tactically distinct. The insurgent is a guerilla. He is a defender against an invading military force. He is a warrior battling warriors. The terrorist is a member of an organization that seeks to disrupt civilian life as a means of regime change. His targets are civilians and civilian infrastructure. The insurgent has a limited goal: the expulsion of the invading troops. The terrorist has a much broader goal: the disruption of civil society in the name of a larger cause, usually messianic. He wants to heal the world of some all-encompassing evil. His is a never-ending battle. Unlike the insurgent, he never gets to the stage where he says, "We've won, so let's de-escalate." In the mid-1980s, the United States government began to provide Stinger ground-to-air missiles to Afghan resistance fighters. These were clearly insurgents. This technology forced Soviet pilots to fly ground-support planes at 15,000 feet rather than 5,000 feet. This forced a complete re-structuring of Soviet military tactics in Afghanistan. The ground troops no longer received reliable air cover. They pulled out. The Soviets lost the war because of this. Within two years of this retreat, the Soviet Union collapsed. 
The visibly collapsing socialist economy, coupled with the humiliation of the defeat in Afghanistan, gutted the self-confidence of the Soviet leaders. I have intermittently studied terrorism ever since 1963, when I took a course on modern Russian history. Modern Western terrorism began in late 19th century Russia. Lenin's older brother had been executed because he was a member of a Russian terrorist organization. This turned Lenin into a Marxist. In 1881, a terrorist group assassinated the Czar, who had been a liberal (for a Russian) reformer, the man who had freed the serfs. These terrorist groups were self-consciously attempting to destroy the Russian social order. They were revolutionary anarchists. They were convinced that terrorism would call forth repression by the state, which it did. Then, they believed, counter-repression terrorist movements could recruit followers to fight this oppression. They were right, in a way: counter-terrorism recruited Lenin for the cause of revolution. After he gained power in 1917, his initial targets were not the capitalists; they were the anarchists. He liquidated them or sent them to the slave labor camps in Siberia. The ultimate counter-terrorist was Lenin. The anarchist terrorists learned an old lesson: those who live by the sword die by the sword. AN ANCIENT TRADITION We are seeing an escalation of terrorism in Iraq on a scale that has no precedent in history. The suicide bombings are now not only daily, they are intra-daily. At the height of the Intifada, the State of Israel experienced one suicide bombing per month. On the evening news, we hear reports of multiple bombings all over the central part of Iraq: a dozen dead here, two dozen dead there. We are told officially that these bombers are outsiders coming into Iraq. We had better hope that these assessments are not true. If they are true, then the supply of suicide bombers will not decrease just because the United States pulls out of Iraq.
If outsiders are the perpetrators, then they are not tied to national geography. They are not Iraqi nationalists. They are self-consciously part of a regional terrorist network, loosely structured. They are volunteering for service in a larger war, a war outside the geographical confines of their home countries. If these bombers are Iraqi nationalists, there is hope that our departure from Iraq will cool the conflict. The thought of regional terrorists in the Middle East is indeed terrifying. The West's oil is located there. If these are terrorists, as distinguished from insurgents, then they are close to the choke points of the West. If they are terrorists in service of an anti-Western, pro-Muslim cause, then they are closer to destructive power than any terrorists in history. They could create economic chaos in the West by closing the oil pipelines. Oil prices respond in volatile swings to minor marginal changes in output. In my view, this is what they are: Muslim fanatics who see themselves as part of a tradition going back to the Assassins. I have argued since 2001 that Osama bin Laden is self-consciously positioning himself as the Assassins' legendary and near-mythological leader, known as the Old Man of the Mountain. There is an attempt by Western analysts to deny the Islamic origin of these terrorists. This is a very difficult case to make. If it is true, then the terrorists are modern, secular, and nationalist, i.e., essentially Western. That would indicate a very small pool of "talent" to recruit from regionally. I think the story of nationalist suicide bombers from outside Iraq is the product of Western analysts' inability to imagine people who are dedicated to a religion with 1.2 billion adherents, who strap bombs to their bodies and blow up themselves and civilians. Western analysts find it easier to deal with modernism's terrorists than Islam's. I'm not buying it. 
This is an ancient war going back 1,400 years, with a terrorist tradition going back a thousand years. Bin Laden has self-consciously identified America as Crusaders. We should not ignore his rhetoric. He understands his "market." He has not made his appeal to regional nationalists. He has appealed to Muslims. It does not explain anything to label this "Islamofascism." This escalating movement has nothing to do with fascism, which was a short-lived movement of a posturing Italian ex-Communist, with military support from a German racist occultist. "Islamofascism" makes this movement sound modern. It is not modern, except in its technology, which is low tech and dirt cheap. Permit me to reprint part of an article that I wrote for Lew Rockwell's site in mid-September, 2001. I have not changed my opinion. * * * * * * * * * A terrorist group needs recruits. A terrorist movement needs recruits. If your strategy of terror involves the extensive use of suicide missions, you need very dedicated recruits. To get such recruits, you need the following: (1) a cause that is greater than any individual; (2) a sense of destiny associated with your cause; (3) the perception that a sacrificial act on behalf of your cause is never wasted or futile; (4) a vision of victory; (5) publicly visible events that demonstrate the power of your movement. From what little I have read about Osama bin Laden, his movement possesses all five factors. He is especially skilled with respect to point five. He understands symbolism, and he understands Western media. This man is a formidable enemy of Western civilization. I believe that Americans have completely misunderstood the events of 9-11. The attack was not a direct assault on the United States primarily for the sake of making us fearful. It was part of a recruiting campaign. The response of the street people in Palestine was what he had in mind. He gave alienated Palestinians an event to celebrate.
It also gave the Establishment Palestinians a chance to speak out against terrorism. That, too, was part of bin Laden's positioning. He is not Establishment. An extremist, especially a terrorist, must position himself as a member of the non-loyal opposition. Nothing I can imagine could have accomplished this better than the events of 9-11. The Poster If you want to understand what happened on 9-11, visualize a poster with bin Laden in a turban and flowing robes, pointing his index finger at you, with a slogan underneath: "Uncle Osama Wants You." That poster is aimed at the alienated folks back home. For Americans, the slogan is different: "Uncle Osama Wants You Dead." * * * * * * * * * FUSION OR CONFUSION? There are insurgent groups in Iraq. They have been killing American troops, about two a day, for two years. There are terrorist groups in Iraq. They have attacked civilians, local police, and some government figures. Are these groups unified? No. Is there a single chain of command? No. Is there any way to negotiate with any group's leader, who will act in the name of all groups? No. That is why this is now a never-ending war. There are many theories of what motivates the terrorists. This is appropriate: there are many motivations and many terrorist groups. Some are trying to foment a civil war. Others are no doubt dreaming of inflicting permanent damage on American foreign policy for the Middle East. Others are bin Laden's followers: Islamic radicals. All are agreed: Americans and their collaborators are targets. Americans are the symbol of Western power. Americans are there and therefore are convenient targets. The fact that there is a common enemy -- American troops and officials -- has led to confusion in the minds of Western analysts. 
They do not understand that the success of the insurgency in inflicting tactical damage on American troops has served as motivation for terrorists who see their cause in a much broader context, both geographically and historically. Terrorism and insurgency are not fused, but Western analysts are surely confused. They were confused going into Iraq, and they will probably remain confused after we leave Iraq. The terrorists are not confused. They have a goal: the overturning of the West's social order. They now see an historic opportunity. So can the Muslim in the street. This will make their recruiting easier. TAKING THE FIGHT ABROAD Whether the London bombings were the work of Muslim fanatics or anti-WTO fanatics, we do not know. What we know is that terrorism is spreading. The tactics of terrorism are being worked out in Iraq. Today, bombs. Tomorrow. . . ? The master tacticians were the inventors of the car bomb: the IRA. That invention appeared around 1973. It is gaining popularity. It almost brought down one of the Twin Towers in 1993. That would have cost 60,000 lives -- maybe twice that, if the second tower had been hit by a collapsing first tower, which might not have fallen straight down. Iraq has become the on-the-job training program for terrorists. Because the insurgency is perceived as local, which it is, the parallel terrorism is also seen as local, or at most regional. This is a convenient assumption. But is it accurate? To separate regional Middle Eastern terrorism from worldwide Islam is convenient for political analysts who are secular. They don't comprehend the idea of world conquest by an old religious movement that was from day one a military movement. The fact that most members of this religion have abandoned the idea of conquest by force does not deal with the problem of bin Laden, who is a representative of a respected sub-tradition. The insurgency may be growing in Iraq. That is a military concern. On the other hand, it may not be growing. 
It may be "merely" holding its own. What is unquestionably growing is the terrorist movement. That is of much wider and more profound concern than the military one. Because terrorism is growing in Iraq, it is easy to confuse the terrorists with the insurgents. It is easy to assume that once America leaves Iraq, the terrorists will fade away, along with the insurgents. This expectation has about as much validity as the neo-conservatives' expectation in February, 2003, that our troops would be greeted as liberators by the broad mass of Iraqis. It is, in short, a pipe dream -- and there is some funny-smelling stuff in the pipe. CONCLUSION Terrorists are like sharks: they follow the scent of blood. When terrorist tactics appear to be undermining people's trust in the existing social order's ability to defend stability, these tactics spread. The goal of the terrorist is messianic: the replacement of the existing social order with a new one, rarely described and never presented in blueprint form. The enemy is real: existing society. The reform is vague: mostly positive adjectives. Positive adjectives in the minds of terrorists make for intensifying adverbs. With respect to the work of terrorists, they've only just begun. So have the counter-terrorists. Bad mojo. From checker at panix.com Sat Jul 9 00:05:17 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:17 -0400 (EDT) Subject: [Paleopsych] Gary North: Time Wasting: Good vs. Bad Message-ID: Gary North: Time Wasting: Good vs. Bad Gary North's REALITY CHECK Issue 458, June 24, 2005 We will switch to a new mail server next week. Don't worry; there will be no glitches. This is digital! But if, due to circumstances beyond our control, you stop receiving this letter on Tuesdays & Fridays, and if you still want to receive it, sign up again by sending a request to: reality at dailyreckoning.com Here is a reason to stay on this mailing list. You get goodies like the following. . . .
Occasionally, things go right for the good guys and wrong for the bad guys. Here is a case in point. We could use a lot more like this one. http://shurl.org/bigmistake TIME-WASTING: GOOD VS. BAD What I am now experiencing in one isolated aspect of my life's work, you may already have experienced or should experience. I hope you can learn from my mistakes. I also hope you can find a way to overcome your comparable mistakes inexpensively. As you read this, you may think, "That sounds a lot like my experience with. . . ." You may even encounter a "shock of recognition" -- a phrase that usually refers to a moment of self-awareness in which a person recognizes the reality of what he has become or was. I had one of those shocks this week. It was the result of my first evening spent enduring a long, miserable task that I had better accomplish: the cleaning out of my files of clippings. I began to clip magazines and newspapers in the mid-1960s. This escalated a decade later, and became (I now see) maniacal from about 1980 to 1996. Today, I have a dozen 4-drawer legal files filled with clippings. The categories are in the hundreds. My mania ended with the advent of the Web. I stopped subscribing to magazines and newspapers. (I still subscribe to financial newsletters.) Yet I did not save to disk most of the files that I have read on-line. I do save a few documents this way, now that there is a free disk-search-and-retrieve program that I like: Copernic's. http://copernic.com But saving Web-based pages is not a mania for me. That's because I can find more material than I have time to read on almost any topic simply by using Google. I am in the process of moving. I want to get rid of half my filing cabinets. This means getting rid of at least half my files. Whether I will attain this goal is problematical. It took me two hours to go through the equivalent of one file drawer. That means 47 to go. It may be wiser to just put up with the files.
Or maybe I should just toss out all of the files. That's what my father-in-law did two decades ago, after 40 years of clipping stuff. At the time, I thought he had made a big mistake. I no longer do. WASTED TIME Every life is filled with wasted time. Every job is also filled with wasted time. We spend much of our lives collecting, learning, cataloguing, and generally burying ourselves in information that later turns out to be trivia. It took me two hours just to skim through two boxes of clippings. I tossed out a pile of papers about 11 inches high. Most of these were yellowed newspaper clippings. I took one look and tossed most of them. Yet to file each one, I had to clip it, paste the columns on a sheet of paper, label the paper, and file it. That took time. I hate to think about how much time it took. In the bad old days, I had only one category per document. Yet most documents should have been identified by several key words. A clipping on OPEC could have been filed under "Energy: Oil," "foreign policy," or "cartels." A clipping on Henry Kissinger could have been filed under "Kissinger," "foreign policy," "conspiracy," "war," or "myopic weasel." I had to make a choice. Then I had to remember that choice. On the whole, I have been able to retrieve old articles reliably, if I could recall the article. But, after 1996, I found that I rarely went to my old files. So, I have forgotten many categories and most clippings: thousands and thousands of clippings. I did not know that the Web would arrive. I did not know that the way we access new knowledge and details of fading memories would change more than it had changed since the invention of the printing press. I knew by 1982 that scanning software would someday allow me to file my clippings on a hard disk. I knew that data-retrieval software would eventually enable me to retrieve whatever I had filed. I did not think it would take over a decade to produce workable but inefficient products that would do this.
PageKeeper was one of them. It did not catch on. Nothing really works well yet. If it did, Microsoft would either buy it or imitate it. So, I kept clipping. I kept buying filing cabinets. Now I will toss out most of what I filed. As I read file after file, I thought, "Why did I bother to clip this?" There were exceptions. I am keeping about half, but most of these are longer articles clipped from magazines. The newspaper clippings are mostly too narrow or too obscure or long since superseded. As I write this, I recall my advice to a friend who also suffered with clipping mania. He told me in the mid-1980s that he was facing a major problem: a mild heart attack. His physician told him to slow down. Yet he was still compulsively clipping. I told him to let every newspaper pile up for a week. Then start reading them. I predicted that most of what he would have clipped a week before would have become outdated. He did what I said. He reported back within a month that he was clipping far fewer articles. But I did not take my own advice for a decade. The Web forced my hand, not a heart attack. BUT NOT COMPLETELY WASTED TIME Nevertheless, my files have convinced me that all that time was not wasted. The discipline of reading, clipping, and cataloguing articles did provide me with an overall sense of what was going on. Now that I see them, I remember some of it. I can see in one quick survey what developed. A few of the clippings are still worth keeping. Decades ago, Linus Pauling told his assistant Art Robinson that there is value in reading widely and trying to remember seemingly unrelated facts. Pauling said that the mind will sometimes come up with links to memories that will prove useful. Pauling was correct. We don't know how our minds make these connections. They just pop up. We recall a related incident, the way I just recalled my friend with the heart attack. We say, "garbage in, garbage out." But we don't know today what will turn out to be garbage later.
We know that most of it will be, but we cannot accurately predict what. So, we fill our minds with useless stuff. Then we forget most of it. That's what files are for. In the old days, when I decided to write on some topic, I would skim through a file to see what I had clipped. But if the file was really fat, or if there were several fat files on the same topic, I would rarely do this. It was too much work. I would rely on my memory to pop up some recollection. Then I would go to a file and go through it in search of the document. If we only knew what the future would bring... we could avoid so much garbage. ANOTHER IMMENSE "FILE" Technology keeps advancing. As it does, it makes earlier technologies that were once on the cutting edge of progress turn obsolete. It also makes hash of our plans and our investments. My entire career has been tied to books. But the format of books is changing: from paper-based to digit-based. There will come a day when I will have a lightweight, book-sized reader with a screen as easy to read as a 1,200 dots per inch piece of paper. It will enable me to highlight a passage, file it electronically, attach key words to it, and subsequently identify what book and page it came from. Its non-availability today is a matter of price, not technology. When such a product is offered for sale below $500, and when a common format makes digital books legally available for reading, my research will change: no more yellow highlighters, no more photocopies, no more filing cabinets, and no more dependence on my fading memory. I suspect that the product's main limiting factor today is not hardware or software but rather copyright law. There is also the problem of converting entire libraries to a digital format that will be readable by multiple hardware products. Google has begun the work of conversion, and just in time: acid-based paper books printed after 1880 are disintegrating.
It is only a matter of time before Harvard and Stanford will have no library advantage over Podunk State for anything published before 1923: public domain. If the newly independent Republic of Freedonia decides to ignore international copyright law, Harvard and Stanford will have no library advantage over you and me for $200 a year. Today, I can walk into a university library and go on-line. Only rarely do libraries require student passwords. I can access any scholarly journal that the library subscribes to. I can find an article and e-mail it to myself. I can then file it on my own computer. Students can access Lexis-Nexis, which is a database of newspaper and magazine articles. If I can't gain access (and I probably can), I can pay a student $6 an hour to research any topic. Or I just find a cooperating student who lets me use his/her password. If I want to write a book in three weeks, the way I wrote "The War on Mel Gibson," I can do it, cheap. I own a library. It is housed in a 3,000 square foot facility. It has 100 bookcases, seven shelves per case. I am now about to give away 80% of it. The Web has changed the way I work. I have not decided on which institution should get it. Along the far wall is a boxed collection of microcards. On these cards are readable imprints of everything published in the United States from 1639 to 1811: books, pamphlets, sermons, newspapers. It is called "Early American Imprints." I used this set in my university's library, 1969-71, to write my Ph.D. dissertation, "The Concept of Property in Puritan New England, 1630-1720." In those days, only a few graduate research libraries had this set. In the late 1980s, microfiche replaced microcards as the preferred technology. Microfiche can be used to produce printed pages. Microcards cannot. The company that produced the cards was about to use them as landfill. A man found out and went to the managers. He offered to buy them.
He agreed to sell them to individuals and organizations that would not buy the new microfiche version. The outfit agreed. My non-profit research organization bought the main set for $5,000, and I paid even more for the newspapers. This was a mistake. I never got back to the Puritans. Now, microfiche technology is obsolete. The first set of materials is available to libraries on-line. The material is searchable. How, I don't know. The images are poor and the typeface is ancient. The letter "s" looks like an "f," except at the end of a word, when it looks like an "s." The newspapers are not yet digitized. I offered to give away the collection to a college library. The librarian politely refused. He is even dumping hard copies of scholarly journals, as are most librarians. Microcards are dinosaurs. So, I will probably go on eBay and offer the collection to home schoolers or Christian day schools. What cost $18,000 in 1988 will fetch a couple of thousand, maybe. But in 1965, the same collection probably cost $50,000, worth six times as much in today's dollars. Technology giveth and technology taketh away. IF WE KNEW THE FUTURE Entrepreneurship is the art of forecasting the price of things, and then buying items today that will be worth more later, and selling items today that will be worth less. We pay in time for things that will appreciate or depreciate. Money is replaceable. Time isn't. When we waste time, it usually costs us more in the long run than when we waste money. The allocation of time is the most difficult of our responsibilities. I regret having bought those microcards. I did not put them to good use. But I regret far more the investment of time in assembling all those clippings. I see them piling up, and I wonder, "What was I thinking?" Yet there are a few items that I am glad I saved. I think they will prove useful someday in my writing. But who knows? Maybe I should toss out all of the files. It would save about 150 hours of work.
But to do that would be to acknowledge that my future will be very different from my past. Nobody likes to make that kind of complete break with the past. In the movie, "About Schmidt," there is a scene where Jack Nicholson returns to his former place of employment, an insurance company. He spent his career in the accounting department. As he passes by the building, he sees his files in the basement, ready for the dumpster. In these boxes rest the visible results of his life's work. They were important to him at the time, but they are about to be tossed away. What about Schmidt? Was he the equivalent of those files? The script writer did a good job with that scene. He did not have to have Nicholson verbally ponder his own worth. It was crucial to the movie that he not do this. The character was not about to admit to himself or others that his life in retrospect seems to have been, if not wasted, then not significant. But for most people, the illusion of their cultural significance doesn't last long. There is nothing like attending a funeral to remind us of this. But tossing out files comes close. ILLUSIONARY OCCUPATIONS Because I needed something close to mindless to do, I watched the TV show on the 100 most famous lines in movies. There was no doubt in my mind which line would be number one, any more than I had a doubt about the winner of its predecessor, the show on the most famous songs. The song had to be "Over the Rainbow," and it was. The line had to be "Frankly, my dear. . . ." The only reason for watching that sort of show is to find out the also-rans. Far and away, the most important line in the history of the movies was written by William Goldman. Several of Goldman's lines made the top 100, but not his most important one. That line was put into the mouth of Deep Throat in "All the President's Men." It has become legendary: "Follow the money." It does not appear in the book. So, Goldman's greatest line gets no respect. Hardly anyone knows that he wrote it.
Movie actors may imagine that they will exert influence. A few of them may, if they get the right parts. But they do not speak their own lines, and hardly anyone recalls the names of the script writers of any specific line, unless it's a re-make of a play by Shakespeare. Writers imagine that they will be remembered, but here is the grim reality. First, hardly anyone reads old novels, except when they are assigned in an English class. Second, nobody reads old non-fiction books, except when they are assigned in a history class. The Great Books make Great Shelves, but hardly anyone ever takes one of them down from the shelf to read it in order to gain greater wisdom. I sat down and listed books over a century old that have influenced my thinking directly, as distinguished from the influence of some contemporary who said the book is important. The list is incredibly short. De Tocqueville's "Democracy in America" is one, but I only finished both volumes a couple of years ago. I had read in it in grad school. His "Old Regime and the French Revolution" influenced me: one main idea. Burke's "Reflections on the Revolution in France" (1790) is on my list. Bastiat's "The Law" (1850) influenced me, but it is really a long essay. http://shurl.org/bastiat The information in old books gets superseded very fast. If a successor does not pick it up and run with it, or if he has no successor, a book will die. Virtually all old books have died. They are read, not for wisdom but to find out what some author said and what influence he had, way back when. So, authors may enjoy the illusion of having produced a stand-alone masterpiece, but it's still an illusion. Great artists have a shot at this. Nobody else does. But there are few great artists around today, as far as we can see. CONCLUSION We are all in the same boat. Our lives are filled with what appears to be waste. Yet the waste seems necessary for whatever productivity we add. We cannot eliminate waste. 
At best, we can minimize it. Goal-setting and time-management are techniques that help us reduce waste. But no matter what we do, most of what we do seems to subtract from the legacy we leave behind. If this is true, then we might as well accept waste. Somehow, it is an inescapable part of our lives. Waste contributes to our production. So, it's not really all waste. It's just whatever is unaccounted for in our overall production process. I budget waste into my life. I recognize that some of my time will be spent on what appears to be unproductive details. We can increase our output by acknowledging the reality of waste and dealing with it. It's like cholesterol. There is good waste and bad waste. So, I have budgeted in an hour a day for tossing out clippings. But I have decided that it's not for saving file cabinet space. It's for coming across an occasional gem, and hoping that my fading memory will retain it. You know the story: the hope for a pony in the pile of waste. I suggest that you pick a project like this and complete it. If nothing else, it's a good reminder of how few ponies there are in life. We should learn to appreciate them. From checker at panix.com Sat Jul 9 00:05:27 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:27 -0400 (EDT) Subject: [Paleopsych] Wired: Sam Jaffe: Giving Genetic Disease the Finger Message-ID: Sam Jaffe: Giving Genetic Disease the Finger http://wired.com/news/print/0,1294,68019,00.html 5.7.5 Scientists are closing in on techniques that could let them safely repair almost any defective gene in a patient, opening the door for the first time to treatments for a range of genetic disorders that are now considered incurable. The breakthrough, announced in the journal Nature in June, relies on so-called zinc fingers, named after wispy amino acid protuberances that emanate from a single zinc ion.
When inserted into human cells, the fingers automatically bind to miscoded strands of DNA, spurring the body's innate repair mechanism to recode the problem area with the correct gene sequence. A method for fixing miscoded DNA by injecting foreign genes into cells won headlines three years ago when doctors in France and Britain announced a handful of successful cures related to X-linked severe combined immunodeficiency disease, or SCID, also known as "bubble boy" disease. But that method was ultimately proven unsafe. In a paper published earlier this month, scientists at California biotechnology company Sangamo BioSciences showed that zinc fingers can be used to erase targeted portions of DNA without risk of harmful side effects. "This doesn't just deliver a foreign gene into the cell," said Nobel Prize winner and CalTech President David Baltimore, who, with Sangamo paper co-author Matthew Porteus, proposed this method to cure genetic diseases. "It actually deletes the miscoded portion and fixes the problem." At the heart of the breakthrough is the concept of "if it's broke, break it some more." Cells have a method of DNA repair called homologous recombination, which fixes breaks in the double helix of our chromosomes. But the process only repairs places where the DNA has been cut, not where genes have been miscoded. Using a package of synthesized zinc fingers, cells can be tricked into doing nano-surgery on their own genes, Sangamo researchers found. The zinc fingers home in like a guided missile on the exact spot in the genome doctors are trying to target and then bind to it. DNA-devouring enzymes then cut through the double helix of DNA at the exact beginning and end of the targeted gene, and a template of donor DNA helps rebuild the deleted strand. While such a therapy has been theorized for years by Baltimore and others, Sangamo scientists are the first to show test-tube results with human cells.
In a paper published June 2, Sangamo researchers showed how they were able to correct the defective gene in 18 percent of the T-cells extracted from the body of an X-linked SCID patient. That should be enough to cure the disease, as it only takes one corrected T-cell to repopulate a person's immune system with healthy cells, according to Sangamo. If successful in trials, Sangamo's technology would be the first successful gene therapy, three decades after the concept of curing diseases by tinkering with the genome was first proposed. Most gene therapy trials have failed because the methods of inserting new genes into cells (usually with modified viruses as vectors) haven't proved to be effective enough. One trial that did succeed, but then ended in tragedy, was a 2002 French X-linked SCID trial that used retroviruses to deliver a new gene into the patients. The new gene cured the disease in 12 patients, but went on to cause leukemia in three of them. It turned out the foreign gene, in addition to producing the protein that vanquishes X-linked SCID, had the unexpected side effect of sometimes turning on a cancer-causing gene. Sangamo's technology overcomes that problem. Whereas the French viruses inserted the foreign gene randomly into the host cell's genome, the zinc fingers are highly specific and can land only at the targeted gene. "They've certainly raised the bar for gene-therapy safety," said Scott Wolfe, a zinc-finger researcher at the University of Massachusetts Medical School in Worcester, Massachusetts. He points out that the early proof-of-principle work was highly toxic to the cells. The zinc fingers weren't specific enough and they created so many double-stranded breaks in the DNA that a lot of the cells chose to commit suicide rather than try to repair all the breaks. "They really seem to have solved the toxicity problem altogether." 
Although X-linked SCID patients will probably be the first to try the therapy, the technology is extremely versatile for a host of human diseases. "Right now, its greatest weakness appears to be that it is optimized for very small patches of gene repair," said Baltimore. "If it's a long sequence of DNA that has to be fixed, this might not be the best way to do it." Nevertheless, there are a lot of ways to attack diseases without replacing whole genes. Other potential targets for the therapy range from many types of cancer to cystic fibrosis and even AIDS. "If they can figure out how to optimize their zinc fingers for any spot on the genome, this could target any gene you want it to," said Wolfe. From checker at panix.com Sat Jul 9 00:05:59 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:59 -0400 (EDT) Subject: [Paleopsych] NYT: Who Stole Sleep? The Pillow as Perp Message-ID: Who Stole Sleep? The Pillow as Perp New York Times, 5.7.7 http://www.nytimes.com/2005/07/07/fashion/thursdaystyles/07online.html By MICHELLE SLATALLA SHORTLY before midnight the other night, someone leaving the movie theater a block and a half from my house dropped car keys on the pavement. Actually, I don't know if the jingling came from keys or for that matter if there even was jingling, because like most mortals I slept through the incident. Even my dog Sticky, who has ears big enough to interest NASA, continued to snore. But not my husband. "What was that?" he shrieked, sitting upright as if bitten by a snake. The princess who could not sleep on a pea had nothing on my husband. His rest is often disturbed by distant barking, other people's air-conditioners and "something that sounds like a mosquito, only it never stings me." For years I snoozed through the drama. But recently neck and back twinges have begun to surge through my husband like electrical currents and have prompted him to jump out of bed, switch on the light and hop around. 
Even I can't sleep through that. I'm not the first spouse to stumble blearily toward the computer at 2 a.m. in search of a solution. But in the lonely predawn hours, as I considered the Internet's various suggestions - from sleep masks like the foldable Dreamlite Relaxation model ($6.95 at Dreamessentials.com) to white noise machines like the Marsona Sleep Mate 980 ($52.95 at Naturestapestry.com) - one possibility intrigued me above all others. Maybe we needed better pillows. Ours, old and flat and musty, provided about as much neck support as a saltine. Could the pillows be sabotaging my husband's sleep? "A pillow is important if a person has poor sleep to begin with," said Dr. Clete A. Kushida, director of the Stanford University Center for Human Sleep Research. "The environment is important." But which pillow? There are no official standards for pillows and no research to prove that one type is better than another. "The studies haven't been done," Dr. Kushida said. "Basically it comes down to what is the most comfortable." Depending on who you are, that might mean llbean.com's goose down damask pillow (in sizes from standard to king and in fills ranging from soft to firm, $49 to $99). Or Overstock.com's Circle of Down pillow ($29.99). Or Livingincomfort.com's hypoallergenic pillow ($17.88). Or maybe the answer is a synthetic pillow from Bedbathandbeyond.com (from $7.99 for the Jumbo Gusset to $79.77 for the standard-size Indulgence Supreme Thermo-Sensitive). I needed guidance. "How can I tell if my husband is a Thermo-Sensitive type or a Circle of Down man?" I asked Dr. James Maas, a professor and sleep researcher at Cornell University. "Given all the options, I wonder if anyone is sleeping on the right pillow." Professor Maas said that pillow issues affect a great number of Americans. "Somewhere near 50 percent of the country is sleep deprived," he said during a phone interview.
"This country is a country of walking zombies, mostly due to sleep length but also to poor quality of sleep." Professor Maas recommended that my husband cut back on his caffeine intake and that we create a bedroom that was cool, dark and comfortable (which I figured was a nice way of saying our 95-pound dog should get off the bed). As for pillows, the difference comes down to down versus synthetic fill, Professor Maas said. A good pillow of either stuff should last up to 10 years, he said. You can test your pillow to find out if it's past its prime. "You take your pillow," he said. "Fold it in half. If it doesn't spring forward and open instantly by itself, you've got a dead pillow. Replace it." Professor Maas said he liked the quality of pillows manufactured by United Feather and Down, an Illinois company whose products, both down and synthetic, sell under various private labels. For instance United Feather and Down's Insuloft down and PrimaLoft synthetic-fill pillows are for sale online at thecompanystore.com, Landsend.com, Potterybarn.com and llbean.com. "We do a wide variety of fills," said Becky McMorrow, United Feather and Down's marketing manager. "Every retail customer tweaks the pillow to have an exclusive style. Restoration Hardware and Williams-Sonoma use the same fill but not the same fabric cover. Pottery Barn used a damask stripe." No matter where you shop, expect to pay from $29 to $59 for a good synthetic pillow and from $59 to $129 for a goose down pillow with a minimum of 550 fill power, Ms. McMorrow said. "Fill power is a measurement of how lofty an ounce of down is and how high it comes up on a beaker after it's compressed," Ms. McMorrow said. After ascertaining a few facts about my husband - mostly sleeps on his side, switches back and forth between a flat pillow and a fluffier one as the night progresses - Ms. McMorrow mailed me four models to test.
I did not feel it necessary to mention the experiment to him; he has enough on his mind. The first night, I discreetly slipped a PrimaLoft synthetic-fill Side Sleeper into his pillowcase. Gusseted to provide an even sleep surface and neck support for a side sleeper, it was similar to a $39 model from Bedbathandbeyond.com. Then I turned out the light and lay poised to take notes as he fell into an immediate deep sleep. Thirty minutes passed without a peep out of him. Then 60. Then I fell asleep. The next night I introduced the fluffier Insuloft down-filled Side Sleeper (very like a $99 version at Realgoods.com). After 30 minutes he sat up and asked suspiciously, "Do I hear a raccoon?" From this I deduced that while both Side Sleepers provided neck support, he preferred the denser texture of synthetic fill. The third night he also slept well on a down-synthetic blend called the Lyocell (similar to a pillow sold at thecompanystore.com for $89). By the fourth night I was the one who had earned the right to sleep on the Face Saver with "aloe-soft fabric" to prevent wrinkles. The pillow is to go on sale this fall on the Home Shopping Network for about $35. The conclusion? I bought all the pillows, because all four were an improvement over our old ones. The dog thought so, too. E-mail: slatalla at nytimes.com From checker at panix.com Sat Jul 9 00:07:59 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:07:59 -0400 (EDT) Subject: [Paleopsych] Tullock Rules! Message-ID: Yesterday I circulated a 1997 New York Times article that estimated the value of the ecosystem at $33 trillion. Someone asked me in comparison to what. Well, annual World Domestic Product is about $30 trillion. The value of human and other capital is ten times annual WDP, or $300 trillion, and my guess was corroborated by googling. In this country at least, according to Robert Fogel, human capital is about two-thirds of the total.
But humans value themselves more highly than capital markets do, as evidenced by various decisions people make on paying to avoid death at various probabilities. It turns out to be $2-4 million per person in the US. Call it $1 million for the generally much less rich world. So the value of people as estimated by themselves is 6 billion times $1 million per person, or $6 quadrillion. Five percent of this is $300 trillion. Now Gordon Tullock says we are 95% selfish and 5% altruistic. He based his calculations on the fact that Americans allocate 5% of their incomes in the form of net downward redistribution. This has remained constant for 140 years and does not depend on whether the redistributing is done primarily privately or by local governments or primarily by state and national governments. Here's a different confirmation of the same. But the ancient Hebrews had already articulated the Tullock Five Percent Rule, when they said: The Earth from God we do but rent, And all he asks is ten percent. Related articles to follow. From checker at panix.com Sat Jul 9 00:08:07 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:08:07 -0400 (EDT) Subject: [Paleopsych] Wiki: Value of Earth Message-ID: Value of Earth http://en.wikipedia.org/wiki/Value_of_Earth In [6]economics, value of [7]Earth is the ultimate in [8]ecosystem valuation, and important to [9]value of life calculations. It begins with the simple problem that if the Earth ceases to support life, and human life does not continue elsewhere, all economic activity will also cease. There are several ways to estimate the value of Earth: * Estimate the [10]value of life for everything that lives on it, and assign the Earth, as a necessary component and home for that life, the [11]natural capital on which [12]individual capital thrives, at least this much value. Since not all life is valued, and a very little is overvalued, there is high risk of under-estimation. 
One way to avoid this is to work continent by continent to see if there is systematic inflation of the price of life on some compared to the others. * Estimate the cost of [13]replacing the Earth, which may include finding and colonizing another planet, or creating one artificially in a compatible orbit. What if the natural capital of a nearby planet, e.g. [14]Mars, were to compete? What would be the cost of [15]terraforming it to make it as comfortable as Earth? Or even barely habitable? An issue is whether to count transport costs. + As a variation, estimate the cost of a smaller habitat, such as [16]Biosphere 2, and multiply its cost by the ratio between the population of Earth and of that smaller habitat. This, however, is to rely on below-minimum cost figures, since Biosphere 2, although brilliantly ambitious and expensive, was a flop. This method yields only a floor value which Earth itself would vastly exceed. See below for more details. + As another variation, figure out every disaster that might occur due to failure of the [17]biosphere, to a lesser or greater degree, and calculate the price of [18]insurance against all of it. The averted insurance payments are effectively a yield, and this is one way to calculate the value of what Earth is doing for us, for as long as these averted failures do not occur. * Calculate the yield of [19]natural capital, as [20]nature's services, and use the size and consistency of this yield to calculate how much capital there must be. This method was pioneered by [21]Robert Costanza and is promoted in [22]Natural Capitalism. As one might expect, these all produce quite high values for the entire Earth, usually at least in the hundreds of quadrillions of [23]US dollars. This seems appropriate. However, even with this sum in hand, it seems unlikely that even [24]experienced reconstruction subcontractors could complete the task of replacing Earth, certainly not without using Earth itself as a base.
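The last method above, capitalizing nature's yield into a stock of capital, can be sketched in a couple of lines. This is an illustration, not part of the article: the 3% discount rate is an assumed figure, and the $33 trillion annual yield is the Costanza-style ecosystem-services estimate cited earlier in this digest.

```python
# Capitalize an annual yield Y into a stock of natural capital: V = Y / r.
annual_yield = 33e12      # ~$33 trillion/yr, the Costanza et al. ecosystem-services estimate
discount_rate = 0.03      # assumed; the choice of r dominates the answer

natural_capital = annual_yield / discount_rate
print(f"{natural_capital:.2e}")   # about $1.1 quadrillion at a 3% rate
```

The result is hypersensitive to the discount rate: halving r to 1.5% doubles the capital value, which is one reason such estimates span orders of magnitude.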
Rent for use of Earth and its orbit might then also have to be included, and it would be hard to price this without calculating the price of the Earth, again. One way around this is to simply declare the Earth [25]priceless or to be exactly and only as valuable as all [26]financial capital in circulation. This may be equivalent to declaring it [27]worthless however, as [28]economics deals very poorly with assets that are too valuable to trade actively in markets. Replacement methods Returning to the calculation in terms of the replacement cost of Earth's biosystems: In Biosphere 2 over $240 million was spent on developing the infrastructure to support 8 people for two years. The project failed and fresh air had to be pumped in to save the lives of the participants. So Earth is worth at least ($240 million / 8 people) x 6.5 billion people on Earth = 1.95 x 10^17 dollars. This represents the minimum value of the Earth using today's technology. Because the project failed, the true value must be higher than this amount. To put this into perspective, assuming the total value of the world's [30]GDP is $30 trillion, that sum divided into $1.95 x 10^17 gives 6,500 times the world's current GDP. From this we can estimate the cost of cutting a tree or taking a single fish from the ocean if there is evidence that that yielded resource unit may not be replaced. The probability that the resource will be replaced reduces the cost, so a 50% chance that it will be replaced implies cutting in half the cost, since two of them can be taken, on average, before it isn't replaced by [31]nature's services. These estimates can be done using a straight-line method, for initial estimates, or using an exponential to place greater value on the remaining elements of a declining resource. Further calculation of the value of one tree, replaced or otherwise, a metric ton of fish, of soil carbon, depends on these probabilities.
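The replacement-method arithmetic in this section can be checked directly. A minimal sketch using the article's own figures; the `discounted_cost` helper is a hypothetical name illustrating the straight-line discount just described:

```python
# Floor value of Earth via the Biosphere 2 replacement method (figures from the article).
biosphere2_cost_usd = 240e6      # ~$240 million spent to support 8 people for two years
people_supported = 8
earth_population = 6.5e9

floor_value = biosphere2_cost_usd / people_supported * earth_population
print(f"{floor_value:.3g} dollars")    # 1.95e+17 dollars

world_gdp_usd = 30e12
print(floor_value / world_gdp_usd)     # 6500.0 -- "6.5 thousand times" world GDP

# Straight-line discount: if a harvested unit is replaced by nature's services
# with probability p, the effective cost falls to (1 - p) of the
# irreplaceable-unit cost; p = 0.5 halves it, as the article notes.
def discounted_cost(unit_cost, p_replaced):
    return unit_cost * (1 - p_replaced)
```

An exponential variant, as the article suggests, would instead raise the per-unit cost as the remaining stock declines, rather than applying a flat (1 - p) factor.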
The curves for replaced and not-replaced biomass will be roughly equivalent as long as the total biomass is relatively large. Once the total biomass in a specific area becomes depleted to the point where its entire sustainability is threatened, the exponential part of the curve comes into play. Ultimately, we are left with the question of how much we are prepared to pay, as individuals, to avert imminent death; that sum is relatively large. As resources are depleted to the point where conflict over what remains begins to dominate the risk of taking them, this value becomes more visible in the costs of protecting and securing property. So any calculation based on the cost of replacing ecosystems tends to lead to a calculation based on the cost of protecting ecosystems so that their yield can be controlled - but only at the tail end of the process, when it is too late to replace them. There are implications for the costs of [32]national security and [33]climate change, both of which may have to be counted as full [34]factors of production in such an analysis, if not full [35]styles of capital - factors which, if not kept within tight parameters, negate all gains from investment in production.

See Also

* [37]Earth Day
* [38]World Ocean Day
* [39]World Water Day

[41]Categories: [42]Free-market environmentalism | [43]Sustainability

References

6. http://en.wikipedia.org/wiki/Economics
7. http://en.wikipedia.org/wiki/Earth
8. http://en.wikipedia.org/wiki/Ecosystem_valuation
9. http://en.wikipedia.org/wiki/Value_of_life
10. http://en.wikipedia.org/wiki/Value_of_life
11. http://en.wikipedia.org/wiki/Natural_capital
12. http://en.wikipedia.org/wiki/Individual_capital
13. http://en.wikipedia.org/wiki/Replacing_the_Earth
14. http://en.wikipedia.org/wiki/Mars_%28planet%29
15. http://en.wikipedia.org/wiki/Terraforming
16. http://en.wikipedia.org/wiki/Biosphere_2
17. http://en.wikipedia.org/wiki/Biosphere
18. http://en.wikipedia.org/wiki/Insurance
19. http://en.wikipedia.org/wiki/Natural_capital
20. http://en.wikipedia.org/wiki/Nature%27s_services
21. http://en.wikipedia.org/w/index.php?title=Robert_Costanza&action=edit
22. http://en.wikipedia.org/wiki/Natural_Capitalism
23. http://en.wikipedia.org/wiki/US_dollar
24. http://en.wikipedia.org/wiki/Halliburton
25. http://en.wikipedia.org/wiki/Priceless
26. http://en.wikipedia.org/wiki/Financial_capital
28. http://en.wikipedia.org/wiki/Economics
30. http://en.wikipedia.org/wiki/Gross_domestic_product
31. http://en.wikipedia.org/wiki/Nature%27s_services
32. http://en.wikipedia.org/wiki/National_security
33. http://en.wikipedia.org/wiki/Climate_change
34. http://en.wikipedia.org/wiki/Factors_of_production
35. http://en.wikipedia.org/wiki/Capital_%28economics%29
37. http://en.wikipedia.org/wiki/Earth_Day
38. http://en.wikipedia.org/wiki/World_Ocean_Day
39. http://en.wikipedia.org/wiki/World_Water_Day
41. http://en.wikipedia.org/w/index.php?title=Special:Categories&article=Value_of_Earth
42. http://en.wikipedia.org/wiki/Category:Free-market_environmentalism
43. http://en.wikipedia.org/wiki/Category:Sustainability

From checker at panix.com Sat Jul 9 00:08:11 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:08:11 -0400 (EDT) Subject: [Paleopsych] Progressive Policy Institute: Trade Is an Increasing Share of the New Economy Message-ID: Trade Is an Increasing Share of the New Economy http://www.neweconomyindex.org/section1_page03.html WHY IS THIS IMPORTANT? The dramatic expansion of trade means more robust competition, which makes constant innovation more critical to success.
For that reason, globalization has accelerated industrial and occupational restructuring, leading to the decline of some industries and jobs, and the growth of others. One indicator of the extent of the trend toward globalization is the growing value of exports and imports as a share of the economy. THE TREND: Trade has become an integral part of the United States' and world economies. U.S. exports and imports have increased from 11 percent of GDP in 1970 to 25 percent in 1997. Moreover, the United States is increasingly specializing in more complex, higher value-added goods and services, as reflected in the fact that the average weight of a dollar's worth of American exports is less than half of what it was in 1970. World exports increased from $1.3 trillion in 1970 to $4.3 trillion in 1995, in constant dollars. And globalization may be about to move up to a new level. Jane Fraser and Jeremy Oppenheim, of the consulting firm McKinsey & Company, have estimated that the value of the world economy that is "globally contestable," which is to say open to global competitors in product, service, or asset ownership markets, will rise from about $4 trillion in 1995 (approximately a seventh of the world's output) to more than $21 trillion by 2000 (about half of world output). According to Fraser and Oppenheim, "We are on the brink of a major long-term transformation of the world economy from a series of local industries locked in closed national economies to a system of integrated global markets contested by global players."^[28]11 This growth will be driven by global capital markets, reduced economic and trade barriers, and perhaps most importantly, technological change, which makes it easier to locate enterprises and sell products and services almost anywhere. For example, online brokerages like E-Trade or Charles Schwab are just as accessible from Singapore or New Zealand as they are from the United States. THE DATA:^[29]12 References 28. 
http://www.neweconomyindex.org/endnotes.html#11
29. http://www.neweconomyindex.org/endnotes.html#12

From checker at panix.com Sat Jul 9 15:48:48 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:48:48 -0400 (EDT) Subject: [Paleopsych] Scientific American: Natural-Born Liars Message-ID: Natural-Born Liars http://www.sciam.com/print_version.cfm?articleID=0007B7A0-49D6-128A-89D683414B7F0000 May 18, 2005 Thanks to Alice Andrews for this. What I want to know is why mechanisms for detecting *self*-deception haven't evolved as well. Maybe they have, to a certain extent. But the deception can go many layers deep. I might grow up in a culture that believes that Mahomet is Allah's only prophet. If I belong to the vast majority that never really question this statement, then I'm not really all that self-deceiving. No doubt I should be a skeptic, but there are too many things to be skeptical about. And no brain mechanism has direct access to the truth. What might be detected is that I am a little bit *too* sincere in my protestation of faith. In sum, mechanisms for detecting self-deception in others are too costly to develop in most cases. Also, it takes time for such mechanisms to evolve successfully, for there's a race between ever more subtle means of (both other- and self-) deception and the detection of that deception. -------------------- Why do we lie, and why are we so good at it? Because it works By David Livingstone Smith Deception runs like a red thread throughout all of human history. It sustains literature, from Homer's wily Odysseus to the biggest pop novels of today. Go to a movie, and odds are that the plot will revolve around deceit in some shape or form. Perhaps we find such stories so enthralling because lying pervades human life. Lying is a skill that wells up from deep within us, and we use it with abandon. As the great American observer Mark Twain wrote more than a century ago: "Everybody lies ...
every day, every hour, awake, asleep, in his dreams, in his joy, in his mourning. If he keeps his tongue still his hands, his feet, his eyes, his attitude will convey deception." Deceit is fundamental to the human condition. Research supports Twain's conviction. One good example was a study conducted in 2002 by psychologist Robert S. Feldman of the University of Massachusetts Amherst. Feldman secretly videotaped students who were asked to talk with a stranger. He later had the students analyze their tapes and tally the number of lies they had told. A whopping 60 percent admitted to lying at least once during 10 minutes of conversation, and the group averaged 2.9 untruths in that time period. The transgressions ranged from intentional exaggeration to flat-out fibs. Interestingly, men and women lied with equal frequency; however, Feldman found that women were more likely to lie to make the stranger feel good, whereas men lied most often to make themselves look better. In another study a decade earlier by David Knox and Caroline Schacht, both now at East Carolina University, 92 percent of college students confessed that they had lied to a current or previous sexual partner, which left the husband-and-wife research team wondering whether the remaining 8 percent were lying. And whereas it has long been known that men are prone to lie about the number of their sexual conquests, recent research shows that women tend to underrepresent their degree of sexual experience. When asked to fill out questionnaires on personal sexual behavior and attitudes, women wired to a dummy polygraph machine reported having had twice as many lovers as those who were not, showing that the women who were not wired were less honest. It's all too ironic that the investigators had to deceive subjects to get them to tell the truth about their lies. These references are just a few of the many examples of lying that pepper the scientific record. 
And yet research on deception is almost always focused on lying in the narrowest sense -- literally saying things that aren't true. But our fetish extends far beyond verbal falsification. We lie by omission and through the subtleties of spin. We engage in myriad forms of nonverbal deception, too: we use makeup, hairpieces, cosmetic surgery, clothing and other forms of adornment to disguise our true appearance, and we apply artificial fragrances to misrepresent our body odors. We cry crocodile tears, fake orgasms and flash phony "have a nice day" smiles. Out-and-out verbal lies are just a small part of the vast tapestry of human deceit. The obvious question raised by all of this accounting is: Why do we lie so readily? The answer: because it works. The Homo sapiens who are best able to lie have an edge over their counterparts in a relentless struggle for the reproductive success that drives the engine of evolution. As humans, we must fit into a close-knit social system to succeed, yet our primary aim is still to look out for ourselves above all others. Lying helps. And lying to ourselves--a talent built into our brains--helps us accept our fraudulent behavior.

Passport to Success

If this bald truth makes any one of us feel uncomfortable, we can take some solace in knowing we are not the only species to exploit the lie. Plants and animals communicate with one another by sounds, ritualistic displays, colors, airborne chemicals and other methods, and biologists once naively assumed that the sole function of these communication systems was to transmit accurate information. But the more we have learned, the more obvious it has become that nonhuman species put a lot of effort into sending inaccurate messages. The mirror orchid, for example, displays beautiful blue blossoms that are dead ringers for female wasps. The flower also manufactures a chemical cocktail that simulates the pheromones released by females to attract mates.
These visual and olfactory cues keep hapless male wasps on the flower long enough to ensure that a hefty load of pollen is clinging to their bodies by the time they fly off to try their luck with another orchid in disguise. Of course, the orchid does not "intend" to deceive the wasp. Its fakery is built into its physical design, because over the course of history plants that had this capability were more readily able to pass on their genes than those that did not. Other creatures deploy equally deceptive strategies. When approached by an erstwhile predator, the harmless hog-nosed snake flattens its head, spreads out a cobralike hood and, hissing menacingly, pretends to strike with maniacal aggression, all the while keeping its mouth discreetly closed. These cases and others show that nature favors deception because it provides survival advantages. The tricks become increasingly sophisticated the closer we get to Homo sapiens on the evolutionary chain. Consider an incident between Mel and Paul: Mel dug furiously with her bare hands to extract the large succulent corm from the rock-hard Ethiopian ground. It was the dry season and food was scarce. Corms are edible bulbs somewhat like onions and are a staple during these long, hard months. Little Paul sat nearby and surreptitiously observed Mel's labors. Paul's mother was out of sight; she had left him to play in the grass, but he knew she would remain within earshot in case he needed her. Just as Mel managed, with a final pull, to yank her prize out of the earth, Paul let out an ear-splitting cry that shattered the peace of the savannah. His mother rushed to him. Heart pounding and adrenaline pumping, she burst upon the scene and quickly sized up the situation: Mel had obviously harassed her darling child. Shrieking, she stormed after the bewildered Mel, who dropped the corm and fled. Paul's scheme was complete. 
After a furtive glance to make sure nobody was looking, he scurried over to the corm, picked up his prize and began to eat. The trick worked so well that he used it several more times before anyone wised up. The actors in this real-life drama were not people. They were Chacma baboons, described in a 1987 article by primatologists Richard W. Byrne and Andrew Whiten of the University of St. Andrews in Scotland for i magazine and later recounted in Byrne's 1995 book The Thinking Ape (Oxford University Press). In 1983 Byrne and Whiten began noticing deceptive tactics among the mountain baboons in Drakensberg, South Africa. Catarrhine primates, the group that includes the Old World monkeys, apes and ourselves, are all able to tactically dupe members of their own species. The deceptiveness is not built into their appearance, as with the mirror orchid, nor is it encapsulated in rigid behavioral routines like those of the hog-nosed snake. The primates' repertoires are calculated, flexible and exquisitely sensitive to shifting social contexts. Byrne and Whiten catalogued many such observations, and these became the basis for their celebrated Machiavellian intelligence hypothesis, which states that the extraordinary explosion of intelligence in primate evolution was prompted by the need to master ever more sophisticated forms of social trickery and manipulation. Primates had to get smart to keep up with the snowballing development of social gamesmanship. The Machiavellian intelligence hypothesis suggests that social complexity propelled our ancestors to become progressively more intelligent and increasingly adept at wheeling, dealing, bluffing and conniving. That means human beings are natural-born liars. And in line with other evolutionary trends, our talent for dissembling dwarfs that of our nearest relatives by several orders of magnitude. The complex choreography of social gamesmanship remains central to our lives today. 
The best deceivers continue to reap advantages denied to their more honest or less competent peers. Lying helps us facilitate social interactions, manipulate others and make friends. There is even a correlation between social popularity and deceptive skill. We falsify our résumés to get jobs, plagiarize essays to boost grade-point averages and pull the wool over the eyes of potential sexual partners to lure them into bed. Research shows that liars are often better able to get jobs and attract members of the opposite sex into relationships. Several years later Feldman demonstrated that the adolescents who are most popular in their schools are also better at fooling their peers. Lying continues to work. Although it would be self-defeating to lie all the time (remember the fate of the boy who cried, "Wolf!"), lying often and well remains a passport to social, professional and economic success.

Fooling Ourselves

Ironically, the primary reason we are so good at lying to others is that we are good at lying to ourselves. There is a strange asymmetry in how we apportion dishonesty. Although we are often ready to accuse others of deceiving us, we are astonishingly oblivious to our own duplicity. Experiences of being a victim of deception are burned indelibly into our memories, but our own prevarications slip off our tongues so easily that we often do not notice them for what they are. The strange phenomenon of self-deception has perplexed philosophers and psychologists for more than 2,000 years. On the face of it, the idea that a person can con oneself seems as nonsensical as cheating at solitaire or embezzling money from one's own bank account. But the paradoxical character of self-deception flows from the idea, formalized by French polymath René Descartes in the 17th century, that human minds are transparent to their owners and that introspection yields an accurate understanding of our own mental life.
As natural as this perspective is to most of us, it turns out to be deeply misguided. If we hope to understand self-deception, we need to draw on a more scientifically sound conception of how the mind works. The brain comprises a number of functional systems. The system responsible for cognition--the thinking part of the brain--is somewhat distinct from the system that produces conscious experiences. The relation between the two systems can be thought of as similar to the relation between the processor and monitor of a personal computer. The work takes place in the processor; the monitor does nothing but display information the processor transfers to it. By the same token, the brain's cognitive systems do the thinking, whereas consciousness displays the information that it has received. Consciousness plays a less important role in cognition than previously expected. This general picture is supported by a great deal of experimental evidence. Some of the most remarkable and widely discussed studies were conducted several decades ago by neuroscientist Benjamin Libet, now professor emeritus at the University of California at San Diego. In one experiment, Libet placed subjects in front of a button and a rapidly moving clock and asked them to press the button whenever they wished and to note the time, as displayed on the clock, the moment they felt an impulse to press the button. Libet also attached electrodes over the motor cortex, which controls movement, in each of his subjects to monitor the electrical tension that mounts as the brain prepares to initiate an action. He found that our brains begin to prepare for action just over a third of a second before we consciously decide to act. In other words, despite appearances, it is not the conscious mind that decides to perform an action: the decision is made unconsciously. Although our consciousness likes to take the credit (so to speak), it is merely informed of unconscious decisions after the fact. 
This study and others like it suggest that we are systematically deluded about the role consciousness plays in our lives. Strange as it may seem, consciousness may not do anything except display the results of unconscious cognition. This general model of the mind, supported by various experiments beyond Libet's, gives us exactly what we need to resolve the paradox of self-deception--at least in theory. We are able to deceive ourselves by invoking the equivalent of a cognitive filter between unconscious cognition and conscious awareness. The filter preempts information before it reaches consciousness, preventing selected thoughts from proliferating along the neural pathways to awareness.

Solving the Pinocchio Problem

But why would we filter information? Considered from a biological perspective, this notion presents a problem. The idea that we have an evolved tendency to deprive ourselves of information sounds wildly implausible, self-defeating and biologically disadvantageous. But once again we can find a clue from Mark Twain, who bequeathed to us an amazingly insightful explanation. "When a person cannot deceive himself," he wrote, "the chances are against his being able to deceive other people." Self-deception is advantageous because it helps us lie to others more convincingly. Concealing the truth from ourselves conceals it from others. In the early 1970s biologist Robert L. Trivers, now at Rutgers University, put scientific flesh on Twain's insight. Trivers made the case that our flair for self-deception might be a solution to an adaptive problem that repeatedly faced ancestral humans when they attempted to deceive one another. Deception can be a risky business. In the tribal, hunter-gatherer bands that were presumably the standard social environment in which our hominid ancestors lived, being caught red-handed in an act of deception could result in social ostracism or banishment from the community, to become hyena bait.
Because our ancestors were socially savvy, highly intelligent primates, there came a point when they became aware of these dangers and learned to be self-conscious liars. This awareness created a brand-new problem. Uncomfortable, jittery liars are bad liars. Like Pinocchio, they give themselves away by involuntary, nonverbal behaviors. A good deal of experimental evidence indicates that humans are remarkably adept at making inferences about one another's mental states on the basis of even minimal exposure to nonverbal information. As Freud once commented, "No mortal can keep a secret. If his lips are silent, he chatters with his fingertips; betrayal oozes out of him at every pore." In an effort to quell our rising anxiety, we may automatically raise the pitch of our voice, blush, break out into the proverbial cold sweat, scratch our nose or make small movements with our feet as though barely squelching an impulse to flee. Alternatively, we may attempt to rigidly control the tone of our voice and, in an effort to suppress telltale stray movements, raise suspicion by our stiff, wooden bearing. In any case, we sabotage our own efforts to deceive. Nowadays a used-car salesman can hide his shifty eyes behind dark sunglasses, but this cover was not available during the Pleistocene epoch. Some other solution was required. Natural selection appears to have cracked the Pinocchio problem by endowing us with the ability to lie to ourselves. Fooling ourselves allows us to selfishly manipulate others around us while remaining conveniently innocent of our own shady agendas. If this is right, self-deception took root in the human mind as a tool for social manipulation. As Trivers noted, biologists propose that the overriding function of self-deception is the more fluid deception of others. Self-deception helps us ensnare other people more effectively. It enables us to lie sincerely, to lie without knowing that we are lying. 
There is no longer any need to put on an act, to pretend that we are telling the truth. Indeed, a self-deceived person is actually telling the truth to the best of his or her knowledge, and believing one's own story makes it all the more persuasive. Although Trivers's thesis is difficult to test, it has gained wide currency as the only biologically realistic explanation of self-deception as an adaptive feature of the human mind. The view also fits very well with a good deal of work on the evolutionary roots of social behavior that has been supported empirically. Of course, self-deception is not always so absolute. We are sometimes aware that we are willing dupes in our own con game, stubbornly refusing to explicitly articulate to ourselves just what we are up to. We know that the stories we tell ourselves do not jibe with our behavior, or they fail to mesh with physical signs such as a thumping heart or sweaty palms that betray our emotional states. For example, the students described earlier, who admitted their lies when watching themselves on videotape, knew they were lying at times, and most likely they did not stop themselves because they were not disturbed by this behavior. At other times, however, we are happily unaware that we are pulling the wool over our own eyes. A biological perspective helps us understand why the cognitive gears of self-deception engage so smoothly and silently. They cleverly and imperceptibly embroil us in performances that are so skillfully crafted that the act gives every indication of complete sincerity, even to the actors themselves. From checker at panix.com Sat Jul 9 15:49:01 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:49:01 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Garbage Land': Trash Talk Message-ID: 'Garbage Land': Trash Talk http://www.nytimes.com/2005/07/10/books/review/10GENZLIN.html [First chapter appended.] GARBAGE LAND On the Secret Trail of Trash. By Elizabeth Royte. 311 pp.
Little, Brown & Company. $24.95. By NEIL GENZLINGER Imagine a type of obsessive-compulsive disorder that leaves you unable to throw or flush something away without tracking precisely where it goes. Not just from your indoor container to the curb or trunk line; this affliction makes you unable to put your mind at rest unless you follow your castoff into the truck, the transfer station, the landfill, the scrap-metal shredder, the treatment tank. Elizabeth Royte apparently has such a disorder, but rather than (or perhaps in addition to) letting it ruin her life, she has turned it into a likable chronicle of rubbish-realization, ''Garbage Land: On the Secret Trail of Trash.'' Hers is a journey that everyone should take but few will. Put it in a class with how and where we get our gasoline, our food, our bluejeans and sneakers: best not to know the details, because not knowing allows you to not take responsibility. Royte, whose previous book, ''The Tapir's Morning Bath,'' followed researchers in the tropical rain forest, here follows an assortment of garbage collectors, recyclers and sewage treaters, beginning with the men who pick up the stuff she leaves at her curb in Brooklyn on trash day. The idea is to see how much damage she is personally doing in the grand scheme of things and how she might minimize it; to get beyond the easy plateau of environmental awareness (don't eat endangered fish) and look at, well, the outflow. ''It wasn't fair, I reasoned, to feel connected to the rest of the world only on the front end, to the waving fields of grain and the sparkling mountain streams,'' she writes. ''We needed to cop to a downstream connection as well.'' The resulting journey introduces her to a colorful collection of characters: rabid composters, paranoid dump owners, starry-eyed crusaders, even some levelheaded businessmen and -women.
She encounters a fair amount of colorful vernacular as well: ''Coney Island whitefish'' (used condoms in the Gowanus Canal), ''disco rice'' (maggots), ''mongo'' (''trash'' that curbside collectors deem worth saving -- televisions, microwaves, silk blouses, designer skirts). Royte's quest to see where her discards end up hits a number of human obstacles: in parts of the waste underworld, people don't want to talk to her or let her view their landfills or plants. ''Why was it so hard to look at garbage?'' she laments at one point. ''To me, the secrecy of waste managers -- which was surely based on an aversion to accountability -- was only feeding the culture of shame that had come to surround an ordinary fact of life: throwing things away.'' Royte may have subconsciously let this stonewalling affect her: when a site does let her in for a look, she often seems to give it a free pass; her writing loses its skeptical edge and begins to sound like a report from a school field trip. Still, for the vast millions whose knowledge of waste disposal ends at the trash can and the recycling bin, any glimpse at all into this world is illuminating. Royte's lively description of a beast called the Prolerizer, a giant metal-crushing machine, makes you want to pay it a visit yourself: ''The Prolerizer has a 6,000-horsepower synchronous motor and enormous blades that can convert whole cars to fist-sized chunks of scrap in 30 to 60 seconds. . . . Cars plummeted onto the shredder's spinning rotor, which bristled with 32 bow-tie-shaped blades that weighed 300 pounds each. . . . They were 30 inches long, and though made of a steel-manganese alloy, they lasted a mere 24 hours, such was the ferocity of their labors.'' The deeper into trash and sewage Royte gets, the more discouraging the picture becomes. Landfilled trash does not biodegrade into the ''rich, moist brown humus'' of our guilt-free fantasies; it stews for centuries, generating poisonous leachate.
The whole problem of junked computers and cellphones has barely reached public consciousness, even though we're already knee-deep in electronic waste. And as for recycling, some parts of the system seem to work, but the vagaries of markets and the ever-changing array of plastics and mixed-material containers make it hit-or-miss at best; it is in large part something we do for our conscience, not our planet. Some recycling is merely a delaying tactic (mixed plastic, for instance, can be reused only once, as plastic wood or some such), and some is downright harmful (with plastic again the main culprit) because of the toxic substances the process produces. Hard-core enviro-types actually oppose plastic-recycling programs, Royte says, because they foster the belief, held even among those who fancy themselves eco-conscious, that it's all right to swig that all-natural spring water out of a plastic bottle. The true ideal, in this formulation, should be ''closed-loop recycling,'' where no new materials are coming into the system and no waste is being generated. NONE of this is news to those versed in garbology and environmental advocacy, but Royte is not writing for them. She is aiming for a more general public, and a strength of ''Garbage Land'' is that it doesn't get too preachy and is full of humor and self-deprecation. Here, for instance, is what Royte says about finding a mouse in her home composter: ''The E.P.A. has a regulation, called 40 CFR, Part 503.33, concerning 'vector attraction reduction' in soil enhancements. Obviously, I was out of compliance.'' And here is how she describes her encounter with a fertilizer made from septic sludge: ''I shook some Granulite onto my hand, just to see what holding someone else's highly processed feces felt like.
It was no worse than handling raw meat, in the sense that it was so recently part of a living organism.'' She remains casual and scold-free even when she works her way around to the notion that the main thing any of us can do to reduce the waste stream is to buy less stuff. ''Garbage Land,'' though, does have a fundamental bias, one that Royte never confronts: her jumping-off point seems to be the idea that our best, highest use as human beings is to keep our ''garbage footprint'' to a minimum. That is a value judgment, because minimizing waste -- sorting trash, composting, cooking from scratch rather than relying on dinners in microwaveable dishes -- takes time, and time is a currency. Royte sounds smart; it's hard for the reader not to wonder what else she might have done with all those hours she spent washing out her used yogurt containers. Neil Genzlinger is a staff editor at The New York Times. --------------- First chapter of 'Garbage Land' http://www.nytimes.com/2005/07/10/books/chapters/0710-1st-royte.html By ELIZABETH ROYTE

The Dream of Zero Waste

I had been touring San Francisco's garbage infrastructure for two days now - prowling around the city's transfer station, poking into its curbside bins, and following its garbage trucks. My hosts were Bob Besso, who worked for Norcal, the private company with which the city contracted to pick up refuse, and Robert Haley, from the Department of the Environment. Dressed in blue jeans and sneakers, Besso had the lankiness of a marathon runner. He was in his fifties, and he'd worked in recycling for decades. His and Haley's easy-going attitude, and their penchant for plain speaking, were diametrically opposed to the formal inscrutability of New York's sanitation operatives. The best part of hanging around Besso was his competitive streak: both he and Haley were walking poster children for Zero Waste. Who could throw out less? Who had more radically altered their lifestyle to leave a smaller human stain?
The Zero Waste concept was a growing global phenomenon. Much of Australia had committed to achieving the goal by 2010, and resolutions had been passed in New Zealand, Toronto, twelve Asia-Pacific nations, Ireland, Scotland, the Haut-Rhin Department in the Alsace region of France, and several California counties. So far, no community had reached this nirvana, a condition perfected only by nature. For humans to achieve zero waste, went the rhetoric, would require not only maximizing recycling and composting, but also minimizing waste, reducing consumption, ending subsidies for waste, and ensuring that products were designed to be either reused, repaired, or recycled back into nature or the marketplace. Zero Waste, said Peter Montague, director of the Environmental Research Foundation, had the potential to "motivate people to change their life styles, demand new products, and insist that corporations and governments behave in new ways." I didn't take Zero Waste literally. I considered it a guiding principle, a rallying cry for green idealists. I understood its intensive recycling component, but what about goods that simply could not be recycled? Over lunch in a Vietnamese restaurant, I learned that Zero Waste wasn't just rhetoric to Haley. "I don't have a trash can at work," he said. On his desk sat a grapefruit-sized ball of used staples - ferrous scrap that he couldn't bear to throw out. "If I'm going to be a leader in Zero Waste I have to live the life," he said. I asked what effect this had on domestic harmony. "My partner is 99.9 percent with me," he said, nodding enthusiastically. "What's the one-tenth-of-a-percent problem?" "She draws the line at twist ties." "Well, you know, you could strip the paper from the wires and -" I interrupted myself. Haley already knew how to recycle a twist tie. At home, he was diverting 95 percent of his waste from the landfill. The 5 percent he threw out was "manufactured goods" - recently some beyond-repair leather shoes.
Worn-out sneakers, of course, were mailed to Nike, which shreds rubber and foam into flooring for gyms. The company accepts non-Nike footwear too, and is also trying to tan leather without questionable toxins and developing shoes made of a new rubber compound that doubles as a biological nutrient - something that could be harmlessly returned to nature. This would be quite an improvement, since according to designer William McDonough conventional rubber soles are stabilized with lead that degrades into the atmosphere and soil as the shoe is worn. Rain sluices this lead dust into sewers, and thence into sludge bound for agricultural fields. According to the National Park Service, which has more than a passing interest in manmade stuff that lies around on the ground, leather shoes abandoned in the backcountry last up to fifty years (if they aren't eaten, one presumes), and rubber boot soles go another thirty. McDonough's 206-page book, Cradle to Cradle, was printed on "paper" made of plastic resins and inorganic fillers. The pages are smooth and waterproof, and the whole thing is theoretically recyclable into other "paper" products. The book weighs one pound, four ounces. A book of comparable length printed on paper made from trees weighs an entire pound less. "What do you think of that?" I asked Haley. He nearly spit out his mouthful of curried vegetables. "McDonough's book will be landfilled! I'd rather cut down a tree!" To Haley and Bob Besso, landfilling was the ultimate evidence of failure. Avoiding the hole in the ground - which in San Francisco's case was owned by Waste Management, Norcal's archenemy - had become a game to them, albeit a game with serious consequences. Haley didn't use his paper napkin at the restaurant, and he scraped the last bit of curry from his plate. But we all knew there was waste behind his meal - in the kitchen, on the farm, in the factory that made the boxes in which his bok choy had been carted to San Francisco.
I wondered if Zero Waste really meant anything, considering the limits of our recycling capability and our reluctance to alter our lifestyles. It was as dreamy an idea as cars that ran on water. And just as appealing to industry, too. "Zero Waste is a sexy way to talk about garbage," Haley said. "It gets people excited." I considered that for a moment. Could we solve our garbage problems by making garbage sexy? Seeing how little I could throw out was fun for me, if not exactly sexy. I'd gotten caught up in the game, back home with my kitchen scale and Lucy's blue toboggan. I recorded my weights in a little book, I crunched my numbers, and I measured my success by how many days it took to fill a plastic grocery sack. In the months to come, I'd find people who neither lived nor worked in the Bay Area who were having fun (if not sexy fun) with garbage reduction. Shaun Stenshol, president of Maui Recycling Service, had toyed with the idea of decreeing a Plastic Free Month, but ultimately deemed such a test too easy. Instead, he issued a Zero Waste Challenge. Over the course of four weeks, Maui residents and biodiesel users Bob and Camille Armantrout produced eighty-six pounds of waste, of which all but four (mostly dairy containers and Styrofoam from a new scanner) were recyclable. Alarmed to note that 35 percent of their weight was beer bottles, which they recycled, the Armantrouts vowed to improve. Bob ordered beer-making equipment to help reduce the amount of glass they generated, and Camille promised to start making her own yogurt. Despite these efforts, the Armantrouts didn't win the Challenge. The winner of the contest, as so often happens, was its inventor. All on his own, Stenshol had produced an even one hundred pounds of waste, of which he recycled ninety-nine pounds.
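The Challenge tallies above lend themselves to a quick check. Here is a minimal sketch, assuming the contest is scored by diversion rate, that is, the share of waste kept out of the landfill (the story implies this but never states the scoring rule, and the `diversion_rate` helper is purely illustrative):

```python
# Hypothetical scoring of the Zero Waste Challenge by diversion rate.
# The metric is an assumption; the pound figures come from the article.

def diversion_rate(total_lb, landfilled_lb):
    """Fraction of waste kept out of the landfill."""
    return (total_lb - landfilled_lb) / total_lb

entrants = {
    "Armantrouts": diversion_rate(86, 4),   # 82 of 86 pounds recyclable
    "Stenshol":    diversion_rate(100, 1),  # recycled 99 of 100 pounds
}

winner = max(entrants, key=entrants.get)
print(winner, round(entrants[winner], 3))  # Stenshol 0.99
```

Under this metric the Armantrouts divert roughly 95 percent and Stenshol 99 percent, so the contest's inventor wins, matching the story.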
Fresh Kills Landfill Paddling One of my favorite expeditions while researching Garbage Land, though this part of the story didn't make it into the book, was kayaking around the Fresh Kills landfill, on Staten Island, with Carl Alderson, a coastal restoration specialist who works for the National Oceanic and Atmospheric Administration. After a delightful paddle around the dump, Alderson and I narrowly escaped arrest by a sanitation cop only to end up in very shallow water with the tide going out. ... For several yards we poled and pried, but soon the kayak was stuck for good. Our car was parked a half mile up Main Creek, but the creek had turned into a mere trickle of brown water. Alderson seemed strangely optimistic. He checked the time on his cell phone and started muttering to himself about the tide. "Okay," he said. "We can wait four hours till it turns, or try again to get upstream, or we can roll over the mud to the edge." The edge, a field of waving Spartina patens, was about 60 feet away. "How deep is the mud?" "Over your waist." I thought about that. "Have you done it before?" "Oh yeah. You've just got to keep from panicking. It's like quicksand." Alderson was standing in the stern, windmilling his arms to generate warmth. My feet were ice blocks. "Is that the wind?" he said, his voice rising, hair fluttering heroically. "It's pushing the water back in!" It was the wind, but it wasn't delivering any more water. The afternoon was just getting colder and more dismal. At least the snow had stopped. Opening his cell phone, Alderson dialed the office of the William T. Davis Wildlife Refuge, where we'd picked up the kayak. "Hey, Linda. Could you do me a favor and check today's tide chart?" He paused. "Uh-huh, you sure of that? Okay, thanks." With a look of resignation, Alderson snapped the phone shut. He had another plan. "There's a bunch of pallets in the refuge greenhouse, maybe we can get Sam and Nate to bring them down and make a path over the mud."
It seemed a little hare-brained to me - we'd need about fifty pallets - but I liked the idea of involving others. Alderson slapped the mudflat with his paddle. It quaked. The mud didn't look particularly ominous in the fading light, but I knew it was roiling with life, with the stuff that feeds the marsh's birds, fish, and mammals. There were marine worms down there, some of them voracious predators more than five inches long, and lugworms and clamworms that ate algae or detritus extracted from the sand. These organisms were tough, able to withstand a half-day of submersion, a half-day of drought, baths of incoming salt water and rinses of sewage- and leachate-tainted fresh. Alderson advised his assistants to avoid touching the mud or water. A woman planting cord grass for him once fell in up to her baseball cap and emerged with a mysterious skin condition he called "full-body pink eye." "How deep did you say the mud is?" I asked Alderson for the second time in twenty minutes. "You can't tell," he said. "It seems bottomless. The silt and organic layering have been going on for millennia. I've watched a few people go down in chest waders. It's scary to watch someone sink deeper in muck and further in panic. I've dragged a few frightened folks out in my day." That shut me up. As we waited for Sam and Nate, I thought about how this landscape had changed. In the Paleo-Indian period, between 10,000 BC and 8,000 BC, the western side of Staten Island was a much higher and dryer place. We know that Lenape Indians occupied the area because they left their tools and high middens of clam and oyster shells behind. Sometime between 8,000 and 1,000 BC, rising sea levels created vast swamps on the western side of the island, at which time Lenape settlements became larger and more permanent. Eventually, Europeans would grow salt hay in these marshes, and it would become Staten Island's largest cash crop. 
Just two hundred years ago, before the hydrology of the swamps had been altered, both Richmond and Main Creeks were navigable for more than a mile. Today, the island's biggest export was garbage. With a low whine, a golf cart kitted out with a forklift emerged from the dun-colored reeds. While Sam and Nate - vague figures in dark clothes - struggled with the pallets, Alderson lounged like a beer drinker in a lawn chair and offered encouraging suggestions. "Not too far apart, boys." They grunted. "So did you know we all passed the navigation course?" "Yeah, Carl," said Sam, with no affect. "But when are they teaching the course about tides?" Alderson laughed, his eyes crinkling. "I guess that's next," he said. Sam dropped the pallets onto the mud, then went back to the greenhouse for more. When the makeshift dock stretched twenty feet, Nate, a burly young man in chest waders, went to the end and strapped on a pair of mud shoes. These resembled snowshoes but were made of webbed rubber that collapses when the foot is lifted and spreads out, like a heron's foot, when it's plopped down. With his thick beard and rubber clothing, Nate looked like a vulcanized hero from the underworld. He trudged toward us in a hulking manner. In his hand was a length of frayed rope. If he had a plan, no one knew it. I watched with growing fascination as he drew nearer - slop, slop, slop. Alderson sat still. I sat still. Nate reached the boat, still silent. Now he tied his line to our bow cleat, turned around, and heaved the boat forward and up the sloping mudflat. "Wow," I said. Alderson nodded at me and smiled. Barehanded and coatless, Nate hauled on the line again and again. "Shouldn't we get out?" I asked Alderson. "Nope," he answered. Apparently, there was just enough water in the mud to lubricate our passage. It dawned on me that Alderson and the boys had been through this routine before, in exactly these positions. . . .
From checker at panix.com Sat Jul 9 15:49:08 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:49:08 -0400 (EDT) Subject: [Paleopsych] Space.com: Teleportation: Express Lane Space Travel Message-ID: Teleportation: Express Lane Space Travel http://space.com/businesstechnology/050708_teleportation.html Leonard David Senior Space Writer 5.7.8 Think Star Trek: You are here. You want to go there. It's just a matter of teleportation. Thanks to lab experiments, there is growth in the number of "beam me up" believers, but there is an equal amount of disbelief, too. Over the last few years, however, researchers have successfully teleported beams of light across a laboratory bench. Also, the quantum state of a trapped calcium ion has been teleported to another calcium ion in a controlled way. These and other experiments all make for heady and heavy reading in scientific journals. The reports would have surely found a spot on Einstein's night table. For the most part, it's an exotic amalgam of things like quantum this and quantum that, wave function, qubits and polarization, as well as uncertainty principle, excited states and entanglement. Seemingly, milking all this highbrow physics to flesh out point-to-point human teleportation is a long, long way off. Well, maybe...maybe not. A trillion trillion atoms In his new book, Teleportation - The Impossible Leap, published by John Wiley & Sons, Inc., writer David Darling contends that "One way or another, teleportation is going to play a major role in all our futures. It will be a fundamental process at the heart of quantum computers, which will themselves radically change the world." Darling suggests that some form of classical teleportation and replication for inanimate objects also seems inevitable. But whether humans can make the leap, well, that remains to be seen.
Teleporting a person would require a machine that isolates, appraises, and keeps track of over a trillion trillion atoms that constitute the human body, then sends that data to another locale for reassembly--and hopefully without mussing up your physical and mental makeup. "One thing is certain: if that impossible leap turns out to be merely difficult--a question of simply overcoming technical challenges--it will someday be accomplished," Darling predicts. In this regard, Darling writes that the quantum computer "is the joker in the deck, the factor that changes the rules of what is and isn't possible." Just last month, in fact, scientists at Hewlett Packard announced that they've hammered out a new tactic for creating a quantum computer - using switches of light beams rather than today's run-of-the-mill, transistor-laden devices. What's in the offing is hardware capable of making calculations billions of times faster than any silicon-based computer. Given quantum computers and the networking of these devices, Darling senses the day may not be far off for routine teleportation of individual atoms and molecules. That would lead to teleportation of macromolecules and microbes with, perhaps, human teleportation to follow. Space teleportation What could teleportation do for future space endeavors? "We can see the first glimmerings of teleportation in space exploration today," said Darling, responding to questions sent via e-mail by SPACE.com to his home office near Dundee, Scotland. "Strictly speaking, teleportation is about getting from A to B without passing through the points between A and B. In other words, something dematerializes in one place, then simply rematerializes somewhere else," Darling said. Darling pointed out that the Spirit and Opportunity rovers had to get to Mars by conventional means. However, their mission and actions are controlled by commands sent from Earth.
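Darling's "trillion trillion atoms" gives a feel for the scale of the data problem. A back-of-envelope sketch, with deliberately rough assumptions (a trillion trillion is 1e24, while common estimates of the atoms in a human body run closer to 1e27; the 100 bits of state per atom is a nominal guess, not a figure from the article):

```python
# Rough, assumption-laden estimate of the raw data a whole-body atomic
# scan might involve. All figures here are illustrative, not sourced.
atoms = 1e27            # order-of-magnitude guess for a human body
bits_per_atom = 100     # nominal: position, element, bonding, quantum state
total_bits = atoms * bits_per_atom

bytes_total = total_bits / 8
zettabytes = bytes_total / 1e21   # 1 ZB = 1e21 bytes
print(f"roughly {zettabytes:.1e} zettabytes")
```

Even under these generous simplifications the answer lands around ten million zettabytes, which is the point behind Darling's caveat that the leap remains "merely difficult" at best.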
"So by beaming up instructions, we effectively complete the configuration of the spacecraft. Also, the camera eyes and other equipment of the rovers serve as vicarious extensions of our own senses. So you might say the effect is as if we had personally teleported to the Martian surface," Darling said. Spooky action at a distance In the future it might be possible to assemble spacecraft "on-the-spot" using local materials. "That would be a further step along the road to true teleportation," Darling added. To take this idea to its logical endpoint, Darling continued, that's when nanotechnology enters the scene. When nanotechnology is mature, an automated assembly unit could be sent to a destination. On arrival, it would build the required robot explorer from the molecular level up. "Bona fide quantum teleportation, as applied to space travel, would mean sending a supply of entangled particles to the target world then use what Einstein called 'spooky action at a distance' to make these particles assume the exact state of another collection of entangled particles back on Earth," Darling speculated. Doing so opens the prospect for genuinely teleporting a robot vehicle--or even an entire human crew--across interplanetary or, in the long run, across interstellar distances, Darling said. "Certainly, if it becomes possible to teleport humans," Darling said, "you can envisage people hopping to the Moon or to other parts of the solar system, as quickly and as easily as we move data around the Internet today." UFO connection? If indeed we are to become a space teleporting civilization, what about other advanced civilizations circling distant stars? Perhaps they have already mastered mass transportation via teleportation? One might even be drawn to consider that mode of travel in connection with purported UFO visitation of Earth. 
"Any strange comings and goings are candidates for teleportation, although you would obviously have to eliminate all mundane explanations first," Darling responded. "According to reports, some UFOs do appear and disappear quite abruptly, which would fit in with the basic idea of teleportation," he said. Darling said that interstellar teleportation would be one way to circumvent the light barrier, "although, as we understand the process now, you would need to make a sub-light trip first to set up the teleportation receiver and assembler at the destination." Quantum teleportation, Darling pointed out is the kind we can do at the subatomic level in the lab today. And that requires equipment at both ends to be able to work. "Extraterrestrial intelligence that is thousands or millions of years ahead of us will certainly be teleportation experts," Darling advised, "if the technology can be implemented at the macroscopic biological level." What possible outcome, then, from ET successfully tinkering with teleportation? "We might expect advanced aliens to be occasionally beaming in to check on our progress as a species," Darling concluded. From checker at panix.com Sat Jul 9 15:50:20 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:50:20 -0400 (EDT) Subject: [Paleopsych] The Australian: Christopher Pearson: No future in eternity Message-ID: Christopher Pearson: No future in eternity http://www.theaustralian.news.com.au/printpage/0,5942,15863203,00.html 5.7.9 I SUPPOSE most people have sometime or other toyed with the fantasy of eternal youth and health. Damien Broderick, a science contributor with The Australian, has turned it into a magnificent obsession. In his futurological books The Spike: Accelerating into the Unimaginable Future and The Last Mortal Generation: How Science Will Alter Our Lives in the 21st Century, he has seriously canvassed the chances that immortality is at hand. 
In last week's The Weekend Australian Review he was at it again, reviewing a new work by Ray Kurzweil and Terry Grossman entitled Fantastic Voyage: Live Long Enough to Live Forever. Surveying the latest evidence, Broderick is upbeat. "It seems likely that powerful research programs will let us first slow, then halt, the leading causes of death - heart disease, cancer, stroke, infections - then, perhaps, reverse ageing, that slow terrible corrosion of our youthful flesh and lively minds." How can this be? "Knowledge is doubling and deepening at a prodigious rate, and even that rate is accelerating ... some of those alive now may thrive indefinitely, kept youthful by the same recuperative processes that build brand-new babies from ageing sperm and ova." Fine and dandy for the fortunate young, you may be thinking, but what about the rest of us? Are we the last to feed the worms or crematorial fires? "Perhaps not, if a kind of maintenance engineering can be applied to our ailing bodies. The remedy may be complicated: genomic profiling, pills, supplements, stringent diet, more exercise than we care for ... In the slightly longer term, our bodies may be infused with swarms of machines not much larger than viruses, nanobots designed to scavenge wastes and repair tissue damage at the scale of cells." Broderick envisages a future in which "every human will have the choice of staying healthily young indefinitely or of stepping aside, if they choose, to make room for a new life, assuming, of course that we linger on this planet and that we remain strictly human". From a futurologist's perspective, inter-planetary emigration is probably neither here nor there. However, an attenuated relationship with the strictly human does raise philosophical problems. Broderick is a techno-triumphalist; tomorrow belongs to him. "No doubt the arguments will continue for generations until all those opposed to endless life have died."
If his confidence is warranted, it's surprising that there hasn't been more of a fuss made about such startling developments. Admittedly you can go to the Immortality Institute's website or log on to the World Transhumanist Association, but so far not a peep out of the federal Government. Are they just trying, yet again, "to underpromise and deliver in spades" as John Howard is wont to say? Usually voluble sources were tight-lipped, so I decided to try thinking like a futurologist. Supposing immortality were technically feasible, how would people avail themselves of the opportunity? First World economics suggests that they'd have to pay for it and that, like any scarce resource, it would be rationed by price. Initially the capital cost would be astronomical and keep eternal youth as the preserve of the very rich and, no doubt, their pets. If electoral pressure -- and occasional riots -- obliged the G8 governments to pour endless public funding into nanobot research, cryogenics and cloning, the unit cost would fall. But even if immortality became a national health service item, there would still be tricky distributional issues. For example, someone would have to make decisions about who was least likely to benefit from treatment and explain why they'd, as it were, missed the bus. Then again, think of the recriminations from the Third World, unless the elixir of life were made freely available and as UN cant puts it: "Within a socially acceptable time frame." Or forget about the recriminations and think instead about a rogue state or a terrorist organisation getting a nuclear weapon. How easy to hold the life-enhanced (but by no means indestructible) populations of the developed world to ransom: the slogan would be immortality for all or for none. Even if enlightened self-interest triumphed, in an orderly transition to a post-mortal world, there would still be pesky economic issues to sort out. 
What, for example, happens to countries where huge amounts of capital are diverted from other kinds of productive investment into a bottomless pit of human resource development? In a society where those entering into immortality spend most of their time at the gym or taking (on Broderick's reckoning) 250 pills a day, who does the work and prepares the food? After time and tide have borne away the last mortal cohort, there'd be an end to the transfers of inherited capital that previously helped keep the wheels of industry and speculative enterprise turning. For fear of running short, business and investors would become highly risk-averse. While some optimists might reckon that there's always time to make more money, most of us would be playing it safe and hoarding or saving up for planetary migration and to fund the next generation of life-enhancers. Talking of the next generation, reproduction as we have known it would lose any sense of urgency. The notion of immortality through progeny and the survival of one's genes would fade away. Indeed, given the amount of time that would have to be devoted to personal regeneration, it would be surprising if people had any left over to devote to parenting. Besides, the zero population growth lobby and the greens would doubtless be arguing that there's no more room, at least on this over-crowded continent. Presumably, in the transition period, adopting Third World babies would be permitted. It might also be possible - borrowing the model of carbon emissions trading - to buy the reproductive entitlements of adults who'd been talked into renouncing their access to immortality. Forward-thinking regimes such as China's might well set up a market in the reproductive rights of long-term prisoners and those condemned to death, to cover administrative costs and so forth and to complement the existing trade in body parts. 
Futurologists seldom take much notice of scarcity economics and they're apt to assume technological progress means abundance for all. It hasn't so far, of course, and -- if scarce resources meant rationing the right to reproduce -- we would all be in terrible trouble. For it is the experience of parenthood that most effectively teaches us, men especially, the lessons of selflessness. That hard-wired capacity for unconditional love of helpless offspring turns self-preoccupied adolescents into adults almost overnight. Without parenthood, the race would become spoiled and go to rack and ruin. It is, I suppose, just conceivable that Broderick may be right about the theoretical possibility of indefinitely prolonged life. However, human nature is less malleable than human physiology and ill-adapted to immortality's challenges. I also have my doubts about whether, if offered the everlasting option, all that many of us would take it. After all, well-adjusted people tend to develop a serene acceptance of finitude. Then again, the sense of an ending is all that makes some lives, especially very long ones, bearable in the meantime. Robert Louis Stevenson's popular Requiem captures the sense of a welcome end:

Under the wide and starry sky
Dig the grave and let me lie.
Glad did I live and gladly die,
And I laid me down with a will.

This be the verse you grave for me:
Here he lies where he longed to be,
Home is the sailor, home from sea,
And the hunter home from the hill.

From checker at panix.com Sat Jul 9 15:51:28 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:51:28 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Hot Property': Freebooters of Industry Message-ID: 'Hot Property': Freebooters of Industry http://www.nytimes.com/2005/07/10/books/review/10LINDL.html [First chapter appended.] HOT PROPERTY The Stealing of Ideas in an Age of Globalization. By Pat Choate. 352 pp. Alfred A. Knopf. $26.95.
By MICHAEL LIND IN recent years a series of reports have provided evidence about the erosion of America's scientific and industrial base. But ''strikingly,'' as Pat Choate observes in ''Hot Property: The Stealing of Ideas in an Age of Globalization,'' ''the massive theft of U.S.-owned intellectual properties as a contributing cause to America's technological decline has been almost totally overlooked in these reports.'' In this timely and important book, Choate sounds the alarm about the threat posed by such piracy. Choate, who is best known as Ross Perot's vice-presidential candidate in 1996, has sounded alarms before. When he published ''Agents of Influence,'' his study of Washington lobbyists funded by the Japanese government and Japanese corporations, he was denounced as a Japan-basher. Today the assertion that Japan has practiced result-oriented mercantilism rather than free trade for decades is rarely disputed. When Choate, like Perot, warned that as a result of the North American Free Trade Agreement, American corporations would move their factories to Mexico to take advantage of low labor costs, he was portrayed as a Mexico-basher. Bill Clinton and Al Gore argued that Mexico would be a market for American manufactured goods. Instead, according to the economist Charles McMillion: ''The large U.S. net export losses to Mexico since Nafta are concentrated in autos, machinery, electronics, apparel and furniture. U.S. net export gains are largely in agribusiness and bulk commodities such as cereals and organic chemicals.'' Score two out of two for Choate. In his latest campaign, Choate is likely to find allies in the business community -- and opponents among some champions of developing nations, as well as some libertarians who argue for weakening or eliminating intellectual property rights. 
Choate quotes the definition of ''copyright industries'' used by the International Intellectual Property Alliance: ''music and book publishing, radio and television broadcasting, cable television, newspapers and periodicals, records and tapes, motion pictures, theatrical productions, advertising, computer software and data processing'' -- to which others, like pharmaceuticals, can be added. According to the intellectual property alliance, ''worldwide digital piracy costs America's copyright industry $20 billion to $22 billion annually, and that approximation excludes illegal Internet downloads.'' Patents, copyrights, trademarks and other intellectual property rights are useless if governments do not enforce them against thieves. However, officials in many developing nations are more concerned about promoting national economic growth by disseminating know-how than with protecting the rights of foreigners. Choate tells the story of Haima, a Chinese corporation that is the largest woven-carpet manufacturer in Asia. After Milliken & Company, the largest private textile company in the United States, lost a Chinese contract to Haima, ''Milliken personnel obtained a copy of Haima's 1999 carpet catalog. It featured 16 copyrighted Milliken designs.'' Although an American court ordered Haima to pay Milliken more than $4 million, the American company has been unable to collect. Choate acknowledges that as an industrializing nation in the 19th century the United States engaged in many of the practices that it condemns today, including industrial espionage. He recounts the tales of Samuel Slater, who brought secret British spinning machine technology to the United States, and Francis Cabot Lowell, a Boston patrician who used his photographic memory to steal the trade secrets of British textile manufacturers. ''The most important feature of the Patent Act of 1793,'' Choate writes, ''was what it did not provide: protections for foreign inventors. 
Only American citizens were eligible for a U.S. patent. Thus, any American could bring a foreign innovation to the United States and commercialize the idea, all with total legal immunity.'' In later generations, Germany and Japan similarly manipulated intellectual property rights. In the 1990's, concern about the theft of intellectual property inspired the United States to promote the Trade Related Intellectual Property System. ''Ironically,'' Choate observes, ''after leading the long, historic fight to put these global protections into place, Washington is now strangely unwilling to use them.'' He notes that ''since June 2000, the U.S. has not filed a single intellectual property case at the World Trade Organization.'' Choate contrasts the attention that the Clinton administration paid to these issues with the lack of interest the Bush administration has displayed. The difference may be one of constituencies. ''Copyright industries'' like the movie and technology sectors provide much of the financial support for the Democratic Party, and they are far more threatened by intellectual property violations than are the commodity producers of the Republican red states. This interpretation finds support in Choate's data on the World Trade Organization: ''Overall, the Bush administration filed only 12 cases with the W.T.O. during its first four years in office. Six of those dealt with foreign impediments to U.S. agricultural exports -- beef, rice, genetically enhanced foods, corn, wheat, cheeses, dairy products and apples.'' Choate says that while industrializing countries may benefit from piracy, the world as a whole loses. ''Piracy and counterfeiting impede innovation: thieves do not invest in research, design, production, development or advertising. . . . 
The result is fewer new medicines, fewer advances in science, fewer new products, fewer new music CD's, fewer new movies, less new software and higher prices for whatever is created.'' Everyone is harmed, either directly or indirectly, ''when thieves steal from Microsoft and Disney.'' And, he concludes, ''What is missing is the will of U.S. political leaders to confront those who are stealing U.S.-owned intellectual properties and with them the future of the American people.'' Michael Lind is the Whitehead senior fellow at the New America Foundation in Washington. ---------------- First chapter of 'Hot Property' http://www.nytimes.com/2005/07/10/books/chapters/0710-1st-choate.html By PAT CHOATE The Golden Covenant Manhattan's 30,000 citizens were awakened on the morning of April 30, 1789, by the roar of cannons. But this day the gunfire was not for war, but to celebrate George Washington's inauguration as the first president of the United States. Soon after 10:30 a.m., the president-elect, led by a joint congressional committee, appeared in Lower Manhattan at Federal Hall (formerly City Hall), which was serving as the new nation's temporary capitol. Washington was dressed in a brown suit of homespun broadcloth -- a gift from the Hartford Woolen Manufactory, a small mill in Connecticut. Before the Revolutionary War, this wealthy Virginia planter had had his suits made of silk and velvet by London's finest tailors. But now he wore a simple American-made suit -- his personal gesture of support for domestic manufacturing. Yet the new president's appearance was far from drab. His suit was adorned with brass buttons embossed with the new national symbol, the bald eagle, and his cuffs had a row of studs, each marked with thirteen stars, symbolizing the founding states. Washington's overture was widely noted in the nation's newspapers, which reported that everything he wore that day had been made in the United States.
George Washington's support of domestic manufacturing was not some passing political sop to a special interest group. Rather, his position had been forged by eight hard years of Revolutionary War experiences and huge debts to European suppliers and financiers. Several times, Washington's army almost lost the war because ammunition was in short supply. In the first year, soldiers often went into battle with no more than nine cartridges each. At the battle of Bunker Hill, the Americans quickly ran out of ammunition, finishing the fight by clubbing the English troops with the butt ends of their muskets. Thousands of Washington's troops spent the winter of 1777-78 at Valley Forge, Pennsylvania, with no shoes for their feet, few clothes, and not enough blankets to keep out the cold. In a letter dated December 23, 1777, a desperate Washington wrote to the Continental Congress that he had "no less than two thousand eight hundred and ninety-nine men in camp unfit for duty, because they are barefoot and otherwise naked." From the beginning of the war, Washington's army lacked guns, gunpowder, rope, sails, shoes, and clothes, among many other military necessities, largely because Great Britain had long prohibited most manufacturing in its American colonies. Instead, the mother country restricted colonial production to timber, furs, minerals, and agricultural goods. Thus, the U.S. economy was overwhelmingly agricultural when war came, with more than 94 percent of the population living on farms. After independence was declared, the new nation had to buy its war matériel from the Dutch, French, and other European suppliers, and do that largely on credit. Any nation that sold goods to the American colonials risked a conflict with Britain, then the world's foremost military power. And when British leaders said they would hang any of the revolutionary leaders they captured, the threat was real, making government service a bit riskier than it is today.
In late 1776, a distressed Continental Congress sent Benjamin Franklin, the best-known American, to Paris to seek French support and goods. His list of purchases in 1777 illustrates just how little manufacturing capacity America had. He bought 80,000 shirts, 80,000 blankets, 100 tons of powder, 100 tons of saltpeter, 8 ships of the line, muskets, and 100 fieldpieces. Then Franklin arranged for the goods to be carried across the Atlantic Ocean in a 4,000-mile, three-month journey to St. Eustatius, a Dutch island in the Caribbean, where smugglers received the supplies and slipped them through the British naval blockade and into the colonies, a 1,400-mile trip that consumed another five to six weeks. For eight years, Washington and the Continental Congress struggled to obtain enough materials for their troops. By war's end, the need for U.S. military and industrial self-sufficiency was seared into their consciousness. For Washington, wearing a plain brown suit of American-made broadcloth on Inauguration Day was a small sacrifice that sent a large message to his fellow citizens. Before taking office, Washington informed Thomas Jefferson, the man who would soon be secretary of state, that the development of manufacturing and inland navigation would be his greatest concern as president. As the historian Doron S. Ben-Atar reveals in his 2004 book Trade Secrets, Washington was a strong proponent of importing European technicians, and in his first State of the Union message, he also encouraged the introduction of foreign technology. In his many speeches, Washington "voiced the widespread expectation that the federal government would devote its energies to industrial development." After assuming the presidency, Washington and the Congress moved quickly to reduce America's dependence on other nations for its national security needs. Action was imperative, because as the Revolution's leaders had seen, today's allies often become tomorrow's enemies.
In that quest for self-sufficiency, Washington turned to Alexander Hamilton, a loyal, brave, and brilliant aide who had led a bayonet attack at Yorktown. Far more foresighted than most of his contemporaries, Hamilton envisioned an economic and political structure for a post-Revolution America. When Washington appointed him secretary of the Treasury, Hamilton was ready with recommendations. In January 1790, he presented Washington and Congress a white paper titled "Report on Public Credit," which outlined the actions necessary to make the new nation appear creditworthy to foreign investors, including a controversial recommendation to pay off all the state debts incurred during the Revolution. At almost the same time as it received Hamilton's credit report, Congress ordered him to prepare a report on manufactures that would "render the United States, independent on foreign nations, for military and other essential supplies." On December 5, 1791, Hamilton submitted to Congress his "Report on Manufactures," which outlined why and how the United States could achieve economic equality with Europe and an industrial self-sufficiency. Building a strong U.S. industrial base, he wrote, " 'tis the next great work to be accomplished." To become a true equal of Europe, Hamilton proposed that the United States follow Europe's lead and erect a tariff wall behind which the American market could develop and American manufactures could prosper. This, he argued, was the only way to confront Europe's manufacturing subsidies, its high tariffs on U.S. imports, and its repeated pattern of dumping goods at artificially low prices in the U.S. market to kill America's infant industries. Without his proposed actions, American manufacturers could never compete fairly, either in Europe or in their own domestic market, Hamilton reasoned. 
Behind this tariff wall, the government could provide the protections of a strong patent system, giving inventors and investors a government-guaranteed right to the exclusive use of their innovations for a fixed period. To accelerate national development, Hamilton also wanted to encourage the migration of skilled foreign workers to America. They would bring badly needed abilities and state-of-the-art technology to the new nation. In his report, Hamilton commented favorably on the actions of Samuel Slater, a twenty-one-year-old mechanic who in 1789 had slipped out of England with one of the British textile industry's crown jewels: the secret of how to build and operate a machine that could spin cotton and wool into thread. Hamilton's message to potential immigrants was loud and clear: bring your nation's industrial secrets to America, gain citizenship, get a patent, be honored, and become wealthy. One irony of the American Revolution is that most of its leaders were Anglophiles. In the French and Indian Wars, Washington sought a regular commission in the British army but was rejected because of his colonial status. Franklin was the delight of London society until he defended the colonists' rights. And in the years leading up to the Declaration of Independence, Jefferson, Madison, and Monroe, among other revolutionary leaders, thought of themselves as loyal British citizens and sought a course that would allow the colonies to remain a part of Britain. Even after the Revolutionary War, with all the bitterness it generated, many English traditions and assumptions remained embedded in the hearts and minds of Americans. One of those fundamental notions was that patent and copyright protections encouraged innovation and national development. The appeal of those ideas is understandable, in part because they had an extended history. 
By the late 1700s, Britain had the longest continuous patent tradition in the world, one whose origins traced back to 1449, when Henry VI issued John of Utynam a letter patent (an open letter with the king's seal) granting the Flemish glassmaker a twenty-year monopoly on the process that produced the windows at Eton College. In exchange, the foreign glassmaker was required to teach English artisans his process. As former subjects of the English king, the newly minted Americans were familiar with the doctrine of the public interest, as incorporated into Britain's Statute of Monopolies (1624). It gave a fourteen-year monopoly to "the true and first inventor" of new manufactures -- a law in effect for more than 150 years before the American Revolution. Likewise, the colonists were familiar with Britain's copyright law, the Statute of Anne, which was enacted in 1710. Under that act, the monopoly power of publishers was weakened and the rights of authors of new works were strengthened with copyright protection for fourteen years, with the possibility of a fourteen-year renewal. And while the Statute of Monopolies did not apply in the colonies, the various colonial governments enacted patent laws that imitated it. After independence and before the ratification of the U.S. Constitution, twelve of the thirteen colonies enacted copyright laws based on the Statute of Anne. For the leaders of the new nation, the basic concept was simple: patents and copyrights encouraged inventors and authors to produce more new and useful creations. These innovations could help the U.S. progress. And as the details of these creations became public, the general knowledge of the nation would be expanded. The process as a whole could only make life better for most Americans and would help the new nation grow richer and stronger faster.
The concept was so fundamental that the Founding Fathers integrated it into the Constitution, believing that the public good fully coincided with the claims of individual authors and inventors. When the "authors and inventors clause" (sometimes called the "progress clause"), drafted by James Madison and Charles Pinckney, was presented for consideration at the Constitutional Convention on September 5, 1787, there was no debate and not a single dissenting vote. Creating a working system of patents and copyrights was a top priority for George Washington. In his first State of the Union message (January 8, 1790), he recommended that Congress enact legislation to encourage the introduction of new inventions from abroad and foster their creation domestically. Congress acted quickly, and the president signed the first Patent Act into law on April 10, 1790, and the first Copyright Act less than two months later, on May 31, 1790. The Patent Act made the issuance of a patent a matter of the highest importance -- a function administered by the president and three senior cabinet officers. There was no patent office. Rather, a patent petition was submitted directly to Secretary of State Thomas Jefferson. Then Secretary of War Henry Knox and Attorney General Edmund Randolph reviewed it. These three constituted a patent board. They established strict rules for obtaining a patent, and on the last Saturday of every month, they met to review applications. If two of the three approved, a patent letter was prepared for the personal signature of President Washington, who then sent it back to Jefferson who, as secretary of state, also signed the letter and then had the Great Seal of the United States affixed. The patentee then had a fourteen-year period during which to exclude others from using the creation. The total cost was roughly $5, which went not to the Treasury but to the clerks who copied and processed the paperwork.
Those early patent grants are greatly valued today for their historic signatures. Jefferson was surprised by the number of innovations inspired by the prospect of a patent. Soon after passage of the 1790 act, more applications and models of inventions were appearing at his office than he and his two colleagues could handle. As often happens with something new in government, the first patent act was a false start, and Jefferson knew it. He urged Congress to alter the "whole train of business and put it on a more easy footing." To that end, he drafted legislation and sent it to his congressional allies in February 1791. Jefferson's escape from the patent board, however, was delayed for more than a year as Congress repeatedly postponed any vote on his or any other patent reform proposal. Meanwhile, the board was obligated to carry out its duties. In 1792 Jefferson wrote his old friend Congressman Hugh Williamson of North Carolina that of all the duties ever imposed on him, reviewing patent applications consumed his time the most and gave him the most "poignant mortification." By early 1793, only 57 patents had been issued and 114 applications were pending, while dozens of others had been denied. Inventors hated the system; it delayed consideration of their applications and imposed such scrutiny that for every one approved, another was denied. The board abhorred the process because it had neither the time nor the resources to meet its obligations. Eventually, Congress enacted the Patent Act of 1793, without most of Jefferson's recommendations. What emerged was legislation that sharply changed the patent system from one with strict rules to one with virtually no rules. Congress allowed inventors to register their inventions with the State Department without an examination. The courts were assigned the responsibility of sorting out which patents were legitimate and which were not. Not surprisingly, with such lax rules the number of applications and issuances rose. 
Between 1793 and 1836, when the patent laws were next altered, more than 9,500 patents were issued. In such a lenient environment, piracy flourished. Many applicants went to the State Department, where models of inventions were found, bought a copy of a patent, duplicated it, and then filed an application for the same invention. Often, the same idea was patented multiple times. The owners of the later grants would enter business, telling others they had the exclusive use of an innovation, or take the official documents to unsuspecting licensees and investors for money. In other situations, an inventor would create an innovation, unaware of the advances of others, secure a patent, and sincerely believe that the conception was his alone. The result was a patent holder's nightmare and a lawyer's dream. The courts were soon clogged with lawsuits. In the end, the most important feature of the Patent Act of 1793 was what it did not provide: protections for foreign inventors. Only American citizens were eligible for a U.S. patent. Thus, any American could bring a foreign innovation to the United States and commercialize the idea, all with total legal immunity. . . . From checker at panix.com Sat Jul 9 15:51:33 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:51:33 -0400 (EDT) Subject: [Paleopsych] AP: Gene hunters flock to Amish country Message-ID: Gene hunters flock to Amish country http://www.globetechnology.com/servlet/story/RTGAM.20050627.gtamishjun27/BNPrint/Technology/target=?mainhub=GT 5.7.6 By PAUL ELIAS Associated Press STRASBURG, Pa. -- Smack dab in the middle of a central Pennsylvanian cornfield, in the heart of an Amish culture that typically shuns technology, sits a marvel of genetic medicine and science. The building itself, a tidy clapboard structure, was raised by hand, rope and horse in the Amish way 16 years ago. Upstairs is the Clinic for Special Children. Downstairs houses the Amish Research Clinic.
The clinic has played a role in numerous significant discoveries by expert gene hunters, from diabetes breakthroughs to unlocking some of the mysteries behind sudden infant death syndrome. The gene hunters, who come from far and wide, spend countless hours rooting through a rich genetic trove that only an insular genetic pool like the Amish can offer. To the Amish, many of whom travel the few dozen miles or so from their homes by horse and buggy, the clinic has been heaven-sent. It very often saves their children, who are disproportionately afflicted by rare and sometimes fatal genetic-based diseases because of 200 years of inbreeding. "It's weird and it's wonderful," said Terry Sharrer, medical curator of the Smithsonian Institution in Washington, D.C. "I have never seen anything like this." The children's clinic is the creation and life's work of Dr. Holmes Morton and his wife Caroline. The Harvard-educated couple surprised colleagues and friends in 1987 when they announced they were giving up prestigious urban posts in Philadelphia, packing up the family and starting a new life among the Amish and Mennonite religious sects. It's a place where the laundry of plain clothes flaps in the breeze and barefoot children in smocks and straw hats run around homes shared and passed down by multiple generations. Road signs warn drivers to share the road with the horses and buggies. Morton hasn't regretted the move. "We discover a new gene almost weekly," he said. Isolated populations with homogenous genes such as the Amish in central Pennsylvania, the Ashkenazi Jews and Indian tribes offer genetic researchers unparalleled insight into disease and genetics. These closed populations, whether by geography or religion, were created by just a few families -- called the "founder effect" -- and built on generations of inbreeding.
The Amish have higher rates of inherited disease caused by bad, recessive genes that are diluted in the general population but remain captive in closed societies. That increases the odds that distant relatives who are each carriers of a rare disorder will marry and produce afflicted children. Since the human genome was mapped five years ago, the genetic discoveries are coming fast and furious in Strasburg. The advent of increasingly powerful gene chips, which enable researchers to experiment with thousands of genes simultaneously, has also advanced Morton's work. Morton estimates that he's uncovered about 150 genes implicated in various diseases, most of them found in the last few years. Last year he found a gene implicated in sudden infant death syndrome. He's also uncovered a genetic cause for the malady maple syrup urine disease, so called because the victim's urine is sweet-smelling. It's a rare enzyme deficiency that, if left untreated, as it was for many years in the Amish community, will lead to mental retardation. Through a severe diet that excludes meat, eggs and milk -- and constant vigilance -- Morton can keep the disease in check. Much of Morton's funding is raised by community auctions that sell quilts, furniture and baked goods made and donated by the Amish. Downstairs, Dr. Alan Shuldiner and his colleagues at the Amish Research Clinic are armed with $10-million (U.S.) in National Institutes of Health grants to conduct a dozen different large-scale studies of the Amish, including diabetes, heart and longevity studies. Shuldiner, also a researcher at the University of Maryland School of Medicine, says his lab has drawn the blood of 3,000 of the 30,000 Amish who live in the area. Shuldiner opened his lab in 1995 after spending a year working out of his car. He initially befriended an Amish woman who had children with diabetes. She served as his liaison to a community skittish of outsiders.
When he moved into the special children's building, he said his credibility among the Amish was cemented. "This building is really a pillar of the Amish community," he said. Mary Morrisey, a nurse in Shuldiner's lab, spends most days whipping around the back roads of Lancaster County in her minivan on a mission to enroll 1,000 Amish. The aim is to uncover genetic causes of heart disease. In two years, the lab has enrolled nearly 600 volunteers -- a testament to how massive the undertaking is. On Wednesday, Morrisey spent two hours at the kitchen table of one family's house, drawing blood and explaining the intricacies of the study to the pair, who are in their mid-60s and have nine children and 54 grandchildren. The screen door was constantly slamming as barefoot kids frolicked about the house, the younger ones fretting about needles being stuck into their grandparents' arms. Grandma soothingly reassured them in the Pennsylvania Dutch they use with each other. For a Luddite community that by and large quits school after the eighth grade, the Amish are well-informed about the technological breakthroughs their blood contains. They view their participation with the "English" scientists as in keeping with the tenets of their branch of Christianity, which demands they help their fellow man. "I wouldn't know why not," the woman responded when Morrisey asked her to join the study. "It could help our family -- and help others." The couple had participated in Shuldiner's initial diabetes study several years ago. "I think we're considered vampires," Morrisey joked. "All we want is their blood. They instinctively roll up their sleeves every time they see me."
From anonymous_animus at yahoo.com Sat Jul 9 18:41:35 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Sat, 9 Jul 2005 11:41:35 -0700 (PDT) Subject: [Paleopsych] Iraq war In-Reply-To: <200507091800.j69I08R16594@tick.javien.com> Message-ID: <20050709184135.73133.qmail@web30810.mail.mud.yahoo.com> >>If these bombers are Iraqi nationalists, there is hope that our departure from Iraq will cool the conflict. The thought of regional terrorists in the Middle East is indeed terrifying.<< --It's obviously a mixture, which brings up the question: will foreign terrorists transform Iraqi nationalists who would otherwise fight conventionally into terrorists who believe killing civilians is legitimate warfare? In the current situation, anyone who proves himself to be capable of killing US troops is likely to gain respect, but every bombing that targets Iraqis will most likely decrease respect for foreign terrorists. Has the number of Iraqi suicide bombers (targeting civilians rather than soldiers) increased since Zarqawi set up shop? Conventional fighters who target soldiers would likely taper off their efforts when US troops leave and Iraqis take up the job of security, allowing Sunnis and borderline resistance members to take part in government without losing face (many would feel that cooperating with a US-sponsored Iraqi government would be humiliating and an admission of defeat... removing US troops would remove that motive as well). But foreign terrorists aren't going to leave unless Iraqis as a people stand against them. That becomes more likely every time Iraqis are targeted by terrorists, so the logical strategy for the US is to stay until Iraqis are known to be solidly against the presence of foreign terrorists and Iraqi nationalists begin to separate themselves from foreign terrorists by denouncing attacks on civilians. 
Any Iraqi resistance fighter who feels it's dishonorable to kill civilians is going to be less and less comfortable being associated with Zarqawi's terrorists, more likely to develop rifts with foreign organizers and more likely to leak information leading to the capture of Zarqawi and other terrorist leaders. That pretty much sets up a series of events leading naturally to the war's end. Along with a global denunciation of terrorism by mainstream Muslims and renewed focus on the Israeli-Palestinian partition process, the general trend should be positive, as long as the US and Iran don't provoke one another into another war in the meantime. Bush will watch to see if Iranians stand up to their hardliners or bide their time thinking the US will intervene. With nobody in either party willing to impose a draft, an invasion of Iran is unlikely, leaving air strikes by the US or Israel as the only option. Would an air strike against Iran's government targets or nuclear facilities produce a larger pool of global terrorists seeking nuclear or biological weapons? Or would Iranian moderates take over immediately? Or both? One assumption that should be eliminated is that there is some fixed number of terrorists, and that it's a good thing to draw them all into Iraq to fight them on their own ground. That logic makes sense at first glance, but it's based on an assumption that's not safe to make. But regardless of how one analyzes the overall situation, the immediate solution to Iraq is for the US to stay until there is good reason to believe Iraqi moderates can establish security and prevent foreign terrorists from gaining influence and using Iraq as a training ground. Once Zarqawi is captured or killed and Iraqi resistance members begin shunning foreign terrorists, it will be a lot easier for the US to leave, and attention can be focused on Iran. I don't really trust the current administration to handle it gracefully, but we have what we have. 
Hopefully enough systems thinkers will focus on geopolitics to provide a counterweight to the gung-ho mentality that will want to rely on forceful moves that may backfire in the long term. We've tended to rely on those moves in the past, doing whatever seemed strongest in the short term, like a beginner chess player who takes every piece that's offered. The end result is messy. Strong moves made hastily can add up to a weak foundation. Michael From checker at panix.com Sun Jul 10 15:59:07 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 11:59:07 -0400 (EDT) Subject: [Paleopsych] NYT Mag: Will Any Organ Do? Message-ID: Will Any Organ Do? New York Times Magazine, 5.7.10 http://www.nytimes.com/2005/07/10/magazine/10ORGANS.html By GRETCHEN REYNOLDS Last summer at one hospital in Dallas, four people died from rabies, an unheard-of level of incidence of this rare disease. As it turned out, each patient was infected by an organ or tissue -- a kidney, a liver, an artery -- that he or she received in a transplant several weeks earlier. Their shared donor, William Beed Jr., a young brain-dead man, had rabies, caught apparently through a bite from a rabid bat, something the surgeons never suspected. They all thought he had suffered a fatal crack-cocaine overdose, which can produce symptoms similar to those of rabies. ''We had an explanation for his condition,'' says Dr. Goran Klintmalm, a surgeon who oversees transplantation at Baylor University Medical Center, where the transplants occurred. ''He'd recently smoked crack cocaine. He'd hemorrhaged around the brain. He'd died. That was all we needed to know.'' Since the rabies deaths, recriminations have flown, procedural reviews have begun and sorrow and regret have dogged the families of the organ recipients.
But the outbreak also exposed a controversy that until then was roiling only the rarefied world of transplant specialists. The issue, although freighted with monetary and bio-ethical complexities, can be boiled down to one deceptively simple question. Should transplant surgeons be using organs from nearly anyone? Organ transplanting has become, in fundamental ways, a victim of its own success. Not long ago, transplant surgery was a dodgy, last-ditch response to end-stage kidney failure. But with the advent of better antirejection drugs and surgical techniques, transplantation has become the treatment of choice for a growing range of conditions, including chronic kidney failure, end-stage lung or liver disease and some congestive heart failure. Kidneys are implanted routinely, as are increasing numbers of livers, hearts and pancreases. Fifteen years ago, about 20,000 people in the United States were on waiting lists for organs. Today, about 88,000 are. The number of donors has not come close to keeping pace. There were about 15,000 transplants completed with organs from cadavers in 1993 and about 20,000 last year. Patients used to wait weeks for an organ. Now they wait years. On average, 18 people on organ waiting lists die every day. Doctors, patients and politicians concerned about transplantation have responded with proposals for increasing donations. In 2002, the American Medical Association voted to endorse pilot projects to give families financial incentives, like cash payments to help cover the costs of funerals, for donating their deceased loved ones' organs. The next year, Congress held hearings on the topic. Representative James Greenwood, Republican of Pennsylvania, introduced a bill that would have authorized demonstration projects to determine whether offering financial incentives to families of brain-dead patients would increase donation rates. There was a public outcry against ''buying'' organs and the bill died. 
(A few states offer tax incentives to families who donate relatives' organs.) Increasingly desperate people in need of transplants have turned to highway billboards and Internet sites to solicit donors. Donations from living people have helped. Today the number of living kidney donors is greater than the number of dead donors. But living donations of other organs are rare because they can be dangerous or are impossible. All of which has led transplant specialists to quietly begin to relax the standards of who can donate. As a result, according to surgeons I spoke with and reports in medical journals, the transplanting of what doctors refer to as ''marginal'' or ''extended criteria'' organs, organs that once would have been considered unusable, has increased considerably in the last several years. The definition of a marginal organ differs from transplant center to transplant center and also from one type of organ to another. This makes it difficult to quantify the increase in the use of these organs. But the expansion is undeniable and has become a much-discussed issue in the field, a topic of ethics papers, surgical conferences and soul-searching on the part of many of the surgeons involved. Fifteen years ago, William Beed Jr. would not have qualified as an organ donor. When he died in May 2004, he was 20, unemployed and had been living with his mother and sister in a bat-infested apartment building in Texarkana, Ark. Throughout his life, Beed had been in and out of trouble, his mother acknowledged when I spoke to her recently. Marijuana and cocaine were found in his urine at the time of his death, according to a report in The New England Journal of Medicine. Beed's drug use alone would have disqualified him as a donor. (It still would keep him from giving blood.) 
''What people have to understand is that donors now, except for the 75-year-olds who die of intracranial bleeds, are not part of the church choir,'' Klintmalm told me when I met with him in Dallas earlier this year. ''The ones who die are the ones you don't want your daughter or your son to socialize with. They drink. They drive too fast. They use crack cocaine. They get caught up in drive-bys.'' The donor pool was different in the early days of transplantation. Beginning in the 60's and through the 80's, a majority of donors were head-trauma victims, people who had been involved in car accidents, botched suicides or tumbles off horses or ladders. These donors were almost all young, between 15 and 45. (In the 80's, few transplant surgeons would take a 50-year-old organ.) They were of average weight, with no history of diabetes, cancer, infectious disease, imprisonment, high blood pressure, cigarette-smoking habits, tattoos (which have been associated with blood-borne illnesses) or unsafe sexual behaviors. The chosen organs, said Klintmalm, who has been in practice for about 25 years, ''were pristine.'' It was easy to adhere to those standards at first. ''We didn't perceive any shortage of organs back in the day,'' says Dr. Nicholas Tilney, the Francis D. Moore professor of surgery at Harvard Medical School and one of the nation's premier kidney-transplant surgeons. ''If a patient had to wait a few weeks for a kidney, that seemed long. We never foresaw the kind of situation we have today.'' Conditions began to change in the 90's. Seat-belt use was more common by then, and fewer Americans were dying of head injuries, depriving transplantation of its most reliable sources of pristine organs. At the same time, the demand for transplants was growing. Surgeons had little choice but to start looking to alternative sources for organs. On April 28, 2004, William Beed Jr. complained to his mother that he was feeling sick. 
''He couldn't swallow,'' his mother, Judy, a practical nurse, recalled when I spoke with her earlier this year. They decided he should go to an emergency room, she said, and the doctors there examined him and sent him home with medication, saying he was dehydrated. By that evening, he was drooling, throwing up, shaking and still having difficulty swallowing. His fever was rising. He started vomiting blood. His father drove him to another E.R. Diagnosis is often a matter of context. Because of doctor-patient confidentiality rules, doctors involved with this case would not talk about it on the record, but a few did say that had Beed not had cocaine in his blood, the E.R. doctors might have investigated his symptoms more aggressively instead of assuming he had overdosed. (Because no autopsy was done, doctors have not been able to establish whether the rabies or the drugs actually killed him.) Soon after, Beed fell into a coma and was put on a ventilator. After a few days, his mother said, the doctors told her and her family that their son was brain-dead. Transplant surgeons use organs from brain-dead patients because they still have a heartbeat, and if the patients are placed on a ventilator, their organs continue to get oxygen. Without oxygen, the organs degrade within minutes. According to Judy Beed, a transplant coordinator approached her and asked whether she would be willing to donate her son's organs. She agreed, and in the middle of the night on May 4, the parents of Joshua Hightower received a phone call offering them William Beed's kidney. Joshua Hightower, who lived in Gilmer, Tex., had had kidney problems since he was 2. They had grown progressively worse over the years. ''When he was 16, things got really bad,'' said his mother, Jennifer Hightower, a special education assistant in the public schools, when I met with her in February. ''He was pale and droopy. He weighed 112 pounds. 
He was sleeping all the time.'' His teachers at Gilmer High School walked him up and down the halls between classes to help him stay awake. A doctor urged his parents to get him on the waiting list for a kidney. In the meantime, Joshua began daily dialysis at home. The process, which purified his blood of toxins, required that he be home every evening by 10. Once there, he was tethered to the dialysis machine for between 9 and 16 hours. When the Hightowers received the call from the hospital, they jumped at the opportunity. It is impossible to know now when the first less-than-pristine organ was retrieved and transplanted. But over the course of the 90's, according to surgeons I spoke with, many barriers fell. Age was almost certainly the first to go. Instead of accepting donors 45 and younger, some transplant centers began, gradually, to take those who were 48, 49, 50 and then up from there. ''I wrote a paper for The Journal of the American Medical Association back in 1989,'' Dr. Lewis Teperman, director of transplantation at New York University Medical Center, told me when I talked to him earlier in the spring. ''It was looking at the outcomes of using older donors. By older donors, we meant someone over 60. That was considered really, really old.'' Recently, N.Y.U. transplanted a liver from a deceased 80-year-old. A couple of years ago, a Canadian hospital used a 93-year-old liver from a deceased donor. Almost imperceptibly, most of the other traditional prohibitions evaporated. Surgeons started accepting lungs from people who had smoked, sometimes for decades. They accepted hearts and kidneys from those who had had high blood pressure or had been obese. They took organs from alcoholics and drug users. (Because cocaine is flushed from the body relatively quickly, it is considered one of the least problematic drugs in donors.) Infectious disease was no longer an automatic disqualifier, either. 
Most surgeons would have once discarded organs from someone with hepatitis C, for instance, since it destroys the liver. But the virus, often spread by injected drug use, is now so common in urban areas that few transplant surgeons will immediately turn down an organ infected with it. Ideally the surgeons implant these infected organs into patients who already harbor hepatitis C. But lately there have been cases in which doctors, as a last resort, have transplanted infected livers into patients who don't have hepatitis C. There is little published data yet about the long-term outcomes for these patients. The expansion into ''marginal'' or ''extended criteria'' organs has not been systematic. One transplant surgeon will use a marginal organ from, say, a morbidly obese donor or a drug user. His patient survives. Then he will repeat it again and again. At the next big transplant conference, he will talk to his colleagues about his success, and they will go back to their own transplant centers and accept, for the first time, an obese donor or a crack-cocaine user. ''You sometimes have to experiment,'' Klintmalm says. Klintmalm and other surgeons I spoke with who work in urban areas say that marginal organs are well on their way to being the majority of organs they transplant. Klintmalm, though, takes issue with the very definition of marginal. ''Older organs should not be called 'marginal,''' Klintmalm maintains, referring to donors over age 55. ''They're standard for us.'' But two years ago, when the United Network for Organ Sharing (UNOS), the private organization that oversees organ transplantation in the United States, published its first definition of extended-criteria organs, age was prominent. The UNOS classification, which applies only to kidneys, defines a marginal kidney as one that comes from a deceased person over 60 or one over 50 with two of three characteristics: stroke, hypertension or abnormal kidney function. 
The definition does not mention smoking, diabetes, hepatitis, alcoholism, obesity or drug use. No government agency sets standards for what makes an organ acceptable. The Department of Health and Human Services contracts with UNOS to handle the day-to-day logistics of the transplant system (getting organs to the next person on the list and so on). But the government's main concerns in policing transplants are that donors and recipients be matched for blood type and that organs be distributed primarily based on medical need, not the wealth, race or celebrity of the recipients. So decisions about whether organs are usable are made on the spot by individual surgeons. To date, not many peer-reviewed studies have been published that examine the long-term outcomes of using marginal organs. The research that has been done mostly looks at kidneys. Recent studies of older kidneys (usually defined as over 50), for instance, have shown that they can function almost as well as younger ones. They don't work for as long, however. In a report presented by UNOS that adjusted for, among other things, the health of the recipient, about a third of extended-criteria kidneys failed within three years. (About 20 percent of non-extended-criteria organs also failed within three years.) Transplantation, even under the best of circumstances, still involves risk. In assessing marginal organs, it is difficult to know whether a bad outcome -- the recipient's death or the organ's failure -- was caused by the organ, the surgery or the fragile health of the recipient. Except for age-related research, few large-scale studies have yet investigated the outcomes of other kinds of extended-criteria kidneys. Do kidneys from diabetics, the obese, alcoholics, smokers or drug users generally work over the long term? Surgeons and scientists can't say for sure. There is even less information about imperfect livers, hearts or lungs.
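The failure rates just cited lend themselves to the kind of rough arithmetic surgeons describe. A minimal sketch, using only the article's own figures (about a third of extended-criteria kidneys and about 20 percent of standard kidneys failing within three years); the variable names and the conversion into relative and absolute risk are illustrative, not drawn from the UNOS report itself:

```python
# Three-year kidney failure rates as reported in the UNOS figures above.
extended_fail = 0.33   # roughly "a third" of extended-criteria kidneys
standard_fail = 0.20   # about 20 percent of standard kidneys

# Relative risk: how many times likelier an extended-criteria kidney
# is to fail within three years than a standard one.
relative_risk = extended_fail / standard_fail

# Absolute difference: extra failures per 100 transplants over three years.
extra_per_100 = (extended_fail - standard_fail) * 100

print(f"relative risk of failure: {relative_risk:.2f}")            # 1.65
print(f"extra failures per 100 transplants: {extra_per_100:.0f}")  # 13
```

On these numbers, accepting a marginal kidney means roughly 13 additional failures per 100 transplants over three years, the sort of trade-off the surgeons quoted in the article weigh against the risk of a patient dying on the waiting list.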
Surgeons do know that livers, for some reason, don't age at the same rate as their original owners. Sixty- or 70-year-old livers can be in fine shape. Hearts and lungs aren't as durable and are more likely to fail as they get older. But surgeons are using them. A 2003 report by the UNOS-administered Organ Procurement and Transplantation Network stated: ''The need to more aggressively utilize available organs for the candidate population as a whole competes with the expectation of each individual.'' And this is, ultimately, the crux of the matter. The marginality of any given organ is relative. It depends on how sick the waiting recipient is. There is a kind of mad, desperate arithmetic that goes into calculating whether to use a marginal organ and when. ''We're all trying to quantify the risks,'' Lewis Teperman, the N.Y.U. transplant director, says. ''If we know that there's a 0.7 increase in relative risk of an extended-criteria organ failing, which is about what we've seen in kidneys so far, you take that number, look at your patient's chances for survival, which might be 90 percent with a perfect organ and 80 percent with an extended-criteria one and. . . . '' He trails off. ''It sounds very clinical when I put it like that, which isn't what I want.'' He starts again. ''It's easy enough to come up with these kinds of calculations. But it's difficult for any of us to apply them in practice, when we're dealing with very sick people's lives.'' Dr. Marlon Levy, a liver-transplant surgeon in Fort Worth and the medical director for the Southwest Transplant Alliance, the group that unwittingly collected and distributed the rabid organs last year, told me: ''You have this immensely complex weighing of benefits and risks in each of these cases. Is the recipient sick enough to justify using any organ, even a really marginal one, to try and save his life and give him a few more years?
Or say you have a slightly healthier patient, and you think he's doing well enough to pass on a marginal organ and wait for a better one. Then, suddenly, he develops complications and dies before another organ becomes available. Were these decisions wrong?'' It is extremely difficult to predict outcomes. ''The best thought-out decision doesn't work out all the time,'' Teperman says. ''I have put in extended-criteria organs that worked perfectly, and the person walked out the door a week later. Other times, a patient has gotten an extended-criteria organ and remained hospitalized for months. I've also waited, thinking a better organ would come along, and the patient has died in the meantime.'' To some extent, surgeons' hands are tied. In general, the current system requires that the most desperately ill patient must get the next organ that comes in, whether it is the best organ for that patient or not. ''Things would work best if we could put the most extended-criteria organs into the less critically ill patients and the healthiest organs into the sickest patients,'' Teperman says. The calculus may be even more complex from the patient's perspective. Dr. Grant Campbell, an epidemiologist with the Centers for Disease Control and Prevention, had a liver transplant in 1990. At that time, he was chronically ill and knowingly accepted an organ infected with cytomegalovirus, a common and usually mild disease but one that can be serious in immunosuppressed transplant patients. Fortunately, he didn't become sick. Even the most rational attempts to weigh the risks and benefits of marginal organs tend to fall apart in the face of truly boundless human despair. ''We would have taken any lungs,'' said Harry Littlejohn, 59, of Lewisville, Tex., whose 28-year-old daughter, Carmen, died in 2001 of cystic fibrosis. She had been No. 1 on the state waiting list for new lungs for eight weeks by then. None became available. 
''We would have done anything to save her,'' he said, ''anything. But there was nothing we could do.'' Joshua Hightower turned 18 on May 10, 2004, in the transplant recovery ward at Baylor University Medical Center. Photos from around that time show him propped up in bed, looking wan but smiling. Joshua had been added to the lengthy transplant waiting list the year before. The doctors said they could not estimate how long the wait would be, Jennifer Hightower, his mother, told me. After the Hightowers received the call from the hospital, his mother recalled, she had wondered about the donor. Anonymity has been crucial to the workings of the organ-transplant system. Donation is supposed to be a blind act of altruism. Donor families aren't told at the time who will receive the organs, and recipients generally are told only the age and sex of the donor. ''You don't want people coming in and saying, 'I'll only donate to Italians.' Or 'I only want them to go to someone in the Ku Klux Klan,''' says Sheldon Zink, director of the program for transplant policy and ethics at the University of Pennsylvania. You also don't want recipients turning down organs because of their own biases. But how much should a surgeon tell a patient who is about to receive a compromised organ? Should he explain that the new kidney comes from a retiree, a drug user or an alcoholic, a chain smoker or a member of a motorcycle gang? Does he have to tell a patient that the organ he is about to receive is considered marginal? ''I wish we had been told more,'' Jennifer Hightower says. Her son, she went on to say, would have declined the kidney had the family known more about Beed's background and his death. Joshua, she says, was not so sick that he couldn't wait. ''I would have made him pass on it.'' Her attitude worries Zink, the ethicist. ''I would question anyone's motivation in refusing an organ from a drug user,'' she told me.
''They aren't responding to clinical information, because the available clinical data'' -- the anecdotal reports from doctors -- ''indicates that organs from crack-cocaine users are fine, in general. So they must be responding to preconceptions about that person's lifestyle. That's only one small step from declining an organ because the donor is black or Hispanic.'' At the moment, no formal national medical standards dictate what transplant surgeons should tell their patients about organs other than kidneys or what they can withhold. Each doctor makes that decision based on how he feels about the ethics of the situation. ''I believe in erring on the side of telling the patient as much as possible,'' Teperman says. ''We have a lengthy consent form here at N.Y.U., and it goes into the use of marginal organs. We ask patients if they will accept one. You don't want to be calling someone at 2 a.m. and saying: 'You can take this organ we just got in that may not be very good or you can wait and maybe die. What do you want to do?' That's an unrealistic burden to put on a patient. We try to have the conversation early on, when patients are a little more clearheaded. That's not always an easy conversation to have. Some patients would rather not think about it. They'd rather the doctor just make the decision for them.'' Some surgeons insist on making decisions about marginal organs unilaterally. ''There are transplant surgeons who think they absolutely know best,'' Zink says. ''They don't bother asking the patient if he wants a marginal organ because they don't want the patient having a choice. They make it for him.'' When Zink recently asked surgeons at a major transplant conference how many of them always tell their patients if they are about to implant a marginal organ, ''about half said they tell the patient,'' Zink told me. 
''Half said they don't.'' Some surgeons withhold information because they are concerned about litigation (better to say nothing than to say that an organ might be compromised, have your judgment proved right and be sued for it). Others are prodded by compassion. ''There are doctors out there who think that a patient will recover better if he isn't worrying about the quality of the organ inside of him,'' Zink says. Wry pragmatism also plays a role. ''At some large urban transplant centers, virtually all organs nowadays are extended-criteria organs,'' Zink points out. Why discuss the option of accepting or declining an imperfect organ? If a patient says he doesn't want one, he'll most likely never get an organ at all. ''I've had doctors tell me they don't even tell their patients that they're about to get an organ that might be infected with hepatitis C because so many of the donated organs may have it,'' Zink says. On Friday, May 28, 24 days after his transplant, Joshua Hightower, who had been released from the hospital, graduated from high school. He clutched his diploma, climbed up into the stands and threw up, Jennifer Hightower said. He didn't stop vomiting all through the celebrations that followed. The next day, he was stumbling, and by the evening, he was having convulsions. Spit dribbled down his face. Doctors at the nearest emergency room hurriedly transferred him to the E.R. at Baylor. Upstairs in the transplant wing, around the same time, three other patients who had received donations from William Beed Jr. lay dying, each with convulsions, delirium or pain. Within two weeks, all but Joshua were dead. Rabies was confirmed as the cause of death a few weeks later. There is no formal system that tracks the short-term fate of individual organs from a particular donor. Surgeons report raw data about deaths and severe surgical complications to UNOS. Had all of the people who received an organ from William Beed Jr. 
not come back to the same hospital and died, one after another, the rabies might never have come to light. In May, three people in New England died who had received organs from the same donor. As it turned out, the donor had passed along lymphocytic choriomeningitis virus, a rare illness transmitted to humans from rodents like hamsters. Two of the recipients, after getting ill, went to the same hospital, which helped doctors there determine that the transplant was the cause. ''I doubt very much that this is the only time'' that rabies has killed transplant patients, says Charles Rupprecht, the C.D.C.'s rabies expert, of the Beed case. ''And I doubt that it will be the last.'' In February, doctors in Germany announced that four patients there had been infected with rabies after receiving organs from a rabid young woman who had died, they had thought, of a heart attack associated with an overdose of cocaine and Ecstasy. ''Rabies is a sentinel disease,'' argues Dr. Matthew Kuehnert, the assistant director for blood safety at the C.D.C., who has studied outbreaks of disease in transplant recipients. ''It tells us we should be paying attention, that something needs to change.'' What, though? ''We cannot start testing every donor for rabies or any of the other once-in-a-lifetime diseases that might crop up,'' Klintmalm says. ''We don't have time. It would cost too much. You might as well shut down every transplant center. If another case came in today exactly like that one, a young man who used crack cocaine and died, I would not demand more explanation. Why? We'll never get the risk of transplants down to zero. It's stupid to pretend we can. That young man appeared to be a perfect donor. I wish we had more like him.'' The broader question is what, if anything, should change in transplantation as marginal organs become everyday organs? ''We at the C.D.C. wish that there were more formal disease surveillance and follow-up of transplant patients,'' Kuehnert said.
''We simply don't know the risks of using certain types of donors at this point.'' The C.D.C. has no authority to require such follow-up and study, though. Only other regulatory agencies within the Department of Health and Human Services or state agencies can set such mandates. In June 2004, the New York State Department of Health became the first regulatory agency in the country to start formally looking into the growing use of marginal organs and to formulate recommendations about what patients should be told and what kinds of organs should be allowed. Its report is due soon. In the meantime, the United Network for Organ Sharing has created a designation for patients who say they will accept a marginal kidney. At the end of February, 42 percent of the adults waiting for a kidney in the United States said they would take a marginal organ. A year ago, while Joshua Hightower lay unconscious but alive, the doctors decided to surgically remove his transplanted kidney. But by then, rabies (not yet identified as the culprit) was everywhere in him. His condition worsened. On June 18, a Friday, doctors tested for brain activity. They found none and declared him brain dead. Stung with grief, Jennifer Hightower and the rest of her family sat with the boy through a wrenching weekend while he remained on a ventilator. On that Monday, his parents agreed to end life support. That afternoon, with his family watching, doctors turned off the ventilator. His mother held him as his heart stopped. It will not be a simple matter in the years ahead to decide how best to save lives with transplants. At some point this year, the number of people on transplant waiting lists in the United States will very likely top 100,000. Unless there is an enormous effort, probably from the federal government, to increase organ donation, the shortage will only grow. ''All these kids we see with diabetes,'' Nicholas Tilney says, ''so many of them will need a new kidney in a few years. 
Where are those organs going to come from?''

Gretchen Reynolds frequently writes about medical topics. Her last article for the magazine was about epidemiologists tracking the avian flu.

From checker at panix.com  Sun Jul 10 15:59:16 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 10 Jul 2005 11:59:16 -0400 (EDT)
Subject: [Paleopsych] NYT: The Half-Life of Anxiety
Message-ID:

The Half-Life of Anxiety
http://www.nytimes.com/2005/07/10/weekinreview/10carey.html

By BENEDICT CAREY

FOR all their murderous power, the four terrorist bombs detonated in London on Thursday morning have not created anything close to mass panic. It's possible to imagine a scene straight out of the movie "War of the Worlds," an unraveling of society, with people disoriented, afraid for their lives, holing up in their basements or fleeing the city. Instead, on Friday morning, a day after the bombing, Londoners were beginning to return to daily routines, some even riding the buses and subway trains.

Although real terrorism is life-shattering to those directly affected and may help attackers achieve political goals - last year's bombing in Madrid, for example, may have helped lead to the withdrawal of Spanish troops from Iraq - the attacks almost never sow the kind of lasting confusion and mass anxiety that the perpetrators presumably want. In Israel, the damage from cafe and bus bombings is typically cleared within hours. In Lower Manhattan, real estate prices have only spiraled upward since the Sept. 11 attacks; the average sale in TriBeCa last year was almost $1.7 million, 16 percent higher than in 2003. And a recent report found that tourism had increased in Madrid since the bombings.

"It says something that it is hard to think of any attack that truly caused a city to cease to function, except perhaps Dresden, Hiroshima or Nagasaki," said Dr.
Lynn Eden, a senior research scholar at Stanford University's Institute for International Studies and author of the book, "Whole World on Fire," an analysis of military bombing and damage predictions. Are Western cities themselves so resourceful and structurally sound that they can absorb just about any blow? Are people adaptable enough that they can live with almost any threat? Or, can certain kinds of threats deeply unsettle a worldly population? Strangely enough, the answer to all three questions is yes. Certainly, an attack of the magnitude of last week's in London creates a climate of fear, and no one in England is likely to forget the carnage; July 7 is certain to carry in the English consciousness some of the same resonance as Sept. 11 does in this country. But terror groups like Al Qaeda are widely thought to be after bigger game - the psychological unraveling, or loss of confidence, in Western society. And high explosives have not done the trick. People understand bombs, for one thing; they know what the weapons can do, and why certain targets are chosen. This allows residents to feel that they have some control over the situation: They can decide not to take trains at rush hour, avoid buses or drive a car, psychologists say. "Unfortunately, and I think people sensed this in watching the coverage in London, bombings have become familiar," and, as such, less frightening to those not directly affected, said George Loewenstein, a professor of psychology and economics at Carnegie Mellon University in Pittsburgh. And for all their flaws, Western governments typically respond immediately to terror, which is far more psychologically soothing than many people admit. It's the reason Prime Minister Tony Blair flew back to London from the Group of 8 conference in Gleneagles, Scotland. And it's the reason both Rudolph Giuliani and Winston Churchill became national heroes. 
All the same, it's clear that people in much of the West believe that their societies are fragile and capable of breaking down. Indeed, as the new millennium approached, there were fears that a large-scale computer meltdown would paralyze hospitals, police and other basic services. And the most unsettling thing about the current brand of extremist Muslim terror is the certainty that the enemy will try anything - including using weapons whose psychological effects are entirely unknown. Even small changes in weaponry can be deeply unnerving. In her history of London during World War II, "London 1945," Maureen Waller describes how Londoners, long accustomed to taking cover from the roar of bombers overhead, plunged into confusion when first hit with Hitler's missiles, the V-1 buzz bomb and the V-2 rocket. "By some acoustic quirk, those in its direct path barely heard a V-2," she writes. "If you did hear it, it had missed you. But that knowledge did nothing to quell the primeval fear each time one exploded." The missiles were far more terrifying than the conventional bombardments, Ms. Waller adds. "Life was uncertain again." Bioterror scenarios are the most obvious modern-day example of such terrifying ambiguity. Despite only a handful of deaths, the anthrax poisonings in 2001 created a rip current of anxiety for millions anytime they opened their mailboxes. Studies find that this kind of free-floating concern, when written across neighbors' or colleagues' faces, is contagious, Dr. Loewenstein said. A similarly frightening mystique might surround the so-called dirty bomb, a conventional explosive containing some radioactive material. A dirty bomb is not a nuclear bomb, though many people assume it is, and can inflict nowhere near the amount of damage or radioactive contamination, said Dr. Irwin Redlener, director of the National Center for Disaster Preparedness and a professor at Columbia University.
While a nuclear bomb could devastate much of the city with its blast and radiation wave, a dirty bomb is a local device - a car bomb, say, that could contaminate a specific area, like Times Square. "There is a whole lot of mythology associated with any nuclear device, and a tendency for people to confuse a dirty bomb with a nuclear bomb, and we just don't know how people will react," Dr. Redlener said. "For instance, would people decide to come back to work and live in an area hit by a dirty bomb?" The widespread revulsion to any hint of radiation, he said, lends the dirty bomb both an ominous novelty and mystery that are much more likely to induce life-altering psychological anxiety than a conventional bomb would. Although such sustained and uncertain threats may fall short of bringing a city to a standstill, they could shatter social networks and slow an economy, experts say. People may still ride the buses, take their children to school and go to work, but a community under continuous assault often turns on itself, with neighbors distrusting one another, research suggests. In studies of Alaskan communities that were affected by the oil spill from the tanker Exxon Valdez in 1989, and of towns dealing with water contamination in New Jersey and New York, sociologists have found what they call social corrosion. Sustained anxiety breaks down social groups and leads to an increase in mental health problems and potentially to economic downturn, said Lee Clarke, a sociology professor at Rutgers University and author of the forthcoming book, "Worst Cases," an analysis of responses to disaster. Beyond the unknown, many people wonder whether city residents would stick around if terrorists successfully staged not one bombing but a series of major attacks in a short period of time. Certainly after Sept. 11, many people openly wondered whether another big attack - a double or triple hit - might be just enough to cause a kind of collective mental breakdown, an exodus. Maybe. 
But in the absence of new species of horror, the histories of Jerusalem, Tel Aviv, Belfast and London still suggest otherwise. "Even if we hypothesize attacks like this for a week, what would happen?" said Dr. Eden. "They would shut down the subway, let's say, and my guess is that there would be a run on bicycles. There would be a difficult adjustment period, there would be some economic ramifications, but people would learn to function."

From checker at panix.com  Sun Jul 10 15:59:21 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 10 Jul 2005 11:59:21 -0400 (EDT)
Subject: [Paleopsych] NYT Mag: Euthanasia for Babies?
Message-ID:

Euthanasia for Babies?
New York Times Magazine, 5.7.10
http://www.nytimes.com/2005/07/10/magazine/10WWLN.html

[I am enthusiastic about euthanasia myself and would not only place very few restrictions on it, but encourage the ending of lives without pleasure, engagement, or meaning (the three ingredients of happiness). Not that I'm eager to euthanise anyone myself. I would be reluctant to defy deeply held attitudes, many of which I've internalized. It's that I'd prefer to live in a society where these attitudes have changed so as to be more supportive of euthanasia. Please understand this distinction.]

By JIM HOLT

One sure way to start a lively argument at a dinner party is to raise the question: Are we humans getting more decent over time? Optimists about moral progress will point out that the last few centuries have seen, in the West at least, such welcome developments as the abolition of slavery and of legal segregation, the expansion of freedoms (of religion, speech and press), better treatment of women and a gradual reduction of violence, notably murder, in everyday life. Pessimists will respond by citing the epic evils of the 20th century -- the Holocaust, the Gulag. Depending on their religious convictions, some may call attention to the breakdown of the family and a supposed decline in sexual morality.
Others will complain of backsliding in areas where moral progress had seemingly been secured, like the killing of civilians in war, the reintroduction of the death penalty or the use of torture. And it is quite possible, if your dinner guests are especially well informed, that someone will bring up infanticide. Infanticide -- the deliberate killing of newborns with the consent of the parents and the community -- has been common throughout most of human history. In some societies, like the Eskimos, the !Kung in Africa and 18th-century Japan, it served as a form of birth control when food supplies were limited. In others, like the Greek city-states and ancient Rome, it was a way of getting rid of deformed babies. (Plato was an ardent advocate of infanticide for eugenic purposes.) But the three great monotheistic religions, Judaism, Christianity and Islam, all condemned infanticide as murder, holding that only God has the right to take innocent human life. Consequently, the practice has long been outlawed in every Western nation. This year, however, a new chapter may have begun in the history of infanticide. Two physicians practicing in the Netherlands, the very heart of civilized Europe, this spring published in The New England Journal of Medicine a set of guidelines for what they called infant ''euthanasia.'' The authors named their guidelines the Groningen protocol, after the city where they work. One of the physicians, Dr. Eduard Verhagen, has admitted to presiding over the killing of four babies in the last three years, by means of a lethal intravenous drip of morphine and midazolam (a sleeping agent). While Verhagen's actions were illegal under Dutch law, he hasn't been prosecuted for them; and if his guidelines were to be accepted, they could establish a legal basis for his death-administering work. At first blush, a call for open infanticide would seem to be the opposite of moral progress. 
It offends against the ''sanctity of life,'' a doctrine that has come to suffuse moral consciousness, especially in the United States. All human life is held to be of equal and inestimable value. A newborn baby, no matter how deformed or retarded, has a right to life -- a right that trumps all other moral considerations. Violating that right is always and everywhere murder. The sanctity-of-life doctrine has an impressively absolute ring to it. In practice, however, it has proved quite flexible. Take the case of a baby who is born missing most or all of its brain. This condition, known as anencephaly, occurs in about 1 in every 2,000 births. An anencephalic baby, while biologically human, will never develop a rudimentary consciousness, let alone an ability to relate to others or a sense of the future. Yet according to the sanctity-of-life doctrine, those deficiencies do not affect its moral status and hence its right to life. Anencephalic babies could be kept alive for years, given the necessary life support. Yet treatment is typically withheld from them on the grounds that it amounts to ''extraordinary means'' -- even though a baby with a normal brain in need of similar treatment would not be so deprived. Thus they are allowed to die. Are there any limits to such ''passive'' euthanasia? A famous test case occurred in 1982 in Indiana, when an infant known as Baby Doe was born with Down syndrome. Children with Down syndrome typically suffer some retardation and other difficulties; while presenting a great challenge to their parents and families, they often live joyful and relatively independent lives. As it happened, Baby Doe also had an improperly formed esophagus, which meant that food put into his mouth could not reach his stomach. Surgery might have remedied this problem, but his parents and physician decided against it, opting for painkillers instead. Within a few days, Baby Doe starved to death. 
The Reagan administration responded to the case by drafting the ''Baby Doe guidelines,'' which mandated life-sustaining care for such handicapped newborns. But the guidelines were opposed by the American Medical Association and were eventually struck down by the Supreme Court. The distinction between killing a baby and letting it die may be convenient. But is there any moral difference? Failing to save someone's life out of ignorance or laziness or cowardice is one thing. But when available lifesaving treatment is deliberately withheld from a baby, the intention is to cause that baby's death. And the result is just as sure -- if possibly more protracted and painful -- as it would have been through lethal injection. It is interesting to contrast the sort of passive euthanasia of infants that is deemed acceptable in our sanctity-of-life culture with the active form that has been advocated in the Netherlands. The Groningen protocol is concerned with an element not present in the above cases: unbearable and unrelievable suffering. Consider the case of Sanne, a Dutch baby girl who was born with a severe form of Hallopeau-Siemens syndrome, a rare skin disease. As reported earlier this year by Gregory Crouch in The Times, the baby Sanne's ''skin would literally come off if anyone touched her, leaving painful scar tissue in its place.'' With this condition, she was expected to live at most 9 or 10 years before dying of skin cancer. Her parents asked that an end be put to her ordeal, but hospital officials, fearing criminal prosecution, refused. After six months of agony, Sanne finally died of pneumonia. In a case like Sanne's, a new moral duty would seem to be germane: the duty to prevent suffering, especially futile suffering. That is what the Groningen protocol seeks to recognize. 
If the newborn's prognosis is hopeless and the pain both severe and unrelievable, it observes, the parents and physicians ''may concur that death would be more humane than continued life.'' The protocol aims to safeguard against ''unjustified'' euthanasia by offering a checklist of requirements, including informed consent of both parents, certain diagnosis, confirmation by at least one independent doctor and so on. The debate over infant euthanasia is usually framed as a collision between two values: sanctity of life and quality of life. Judgments about the latter, of course, are notoriously subjective and can lead you down a slippery slope. But shifting the emphasis to suffering changes the terms of the debate. To keep alive an infant whose short life expectancy will be dominated by pain -- pain that it can neither bear nor comprehend -- is, it might be argued, to do that infant a continuous injury. Our sense of what constitutes moral progress is a matter partly of reason and partly of sentiment. On the reason side, the Groningen protocol may seem progressive because it refuses to countenance the prolonging of an infant's suffering merely to satisfy a dubious distinction between ''killing'' and ''letting nature take its course.'' It insists on unflinching honesty about a practice that is often shrouded in casuistry in the United States. Moral sentiments, though, have an inertia that sometimes resists the force of moral reasons. Just quote Verhagen's description of the medically induced infant deaths over which he has presided -- ''it's beautiful in a way. . . . It is after they die that you see them relaxed for the first time'' -- and even the most spirited dinner-table debate over moral progress will, for a moment, fall silent. Jim Holt is a frequent contributor to the magazine. 
From checker at panix.com Sun Jul 10 16:01:18 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 12:01:18 -0400 (EDT) Subject: [Paleopsych] Book World: Snake Oil Message-ID: Snake Oil http://www.washingtonpost.com/wp-dyn/content/article/2005/07/07/AR2005070701757_pf.html Reviewed by Chris Lehmann Sunday, July 10, 2005; BW06 SHAM How the Self-Help Movement Made America Helpless By Steve Salerno Crown. 273 pp. $24.95 The distinctly American phenomenon of self-help is an affront on many levels. It insults our sense of moral proportion, turning petty grievances into cosmically unappeasable plaints of the spirit, to be resolved only when an elaborate (and usually quite expensive) set of affirmations is unleashed or an inner child is at last quieted. It offends our intelligence with its vapid narcissism, hymning the claustral wonders of the self while spouting undigested tracts of pseudo-mystic wisdom from East, West, North and South. Not least, it aggrieves our ear for well-turned language, with its irritating catchphrases (this or that gender being from Mars or Venus, "chicken soup for the soul," "I'm OK, you're OK"), clunky coinages (Gestalt, transactional analysis, self-actualization) and neologisms (creative visualization, codependency and, for that matter, the very term self-help, which misleadingly suggests a can-do independent spirit in a market awash in gurus and hucksters preaching our dependence on them). The decades-old self-help industry is, in short, a plump, inviting target for a sharp takedown, detailing its origins, follies and suspect claims. Unfortunately, Steve Salerno's SHAM, which draws its title from a rather ponderous author-coined acronym for "Self-Help and Motivation," is not that book. 
More accurately, it is perhaps a third of that book, since Salerno, a former business reporter, is fixated on the notion that, as his sensational title suggests, self-help gurus rarely deliver on their claims to be healers of the wounded American body and soul. This is not a trivial charge, of course, but, intellectually speaking, it's the least interesting feature of the sprawling self-help industry. All sorts of things in contemporary culture don't work yet continue to draw millions of people, usually on a repeat-business model: fad diets, pyramid investment schemes, faith healing, the two-party system. P.T. Barnum's immortal dictum about the regular birthing of suckers is a keystone of the American consumer economy. Nevertheless, Salerno presses a single-minded brief against the practitioners of self-help on the grounds that they consistently fail to deliver the goods promised in their come-ons. To seal the indictment, he describes his own moment of clarity, which came to him during a stretch in a satellite wing of the self-help business, as an editor on a Men's Health-affiliated books program at Rodale Press, a "vast better-living empire." After he had looked over a few marketing surveys, Salerno reports, "one piece of information . . . stood out above all others and guided our entire approach: The most likely customer for a book on any given topic was someone who had bought a similar book within the preceding eighteen months." This was all well and good, Salerno reasoned, for Rodale's regular gardening or Civil War titles, but when it came to self-help, a different standard should apply: "Many of our books proposed to solve, or at least ameliorate, a problem. If what we sold worked, one would expect lives to improve," and repeat business to evaporate. Instead, Salerno writes, "failure and stagnation are central to all of SHAM. 
The self-help guru has a compelling interest in not helping people." Armed with this obvious truth, Salerno provides a series of uncomplimentary thumbnail profiles of self-help leaders, from the Oprah-branded disciplinarian "Dr. Phil" McGraw to the corporate cheerleader Anthony Robbins. Often enough, he yields damning, or simply entertaining, background material -- for example, radio scold Dr. Laura Schlessinger's penchant for poaching the mates of others (in addition to her well-documented dalliance as a nude photo subject). But just as often, Salerno overreaches and recites material that is either simply irrelevant -- as when he tells us that investment guru Suze Orman had "a serious speech impediment" as a child -- or nobody's business. Referring to Orman, for instance, he writes that she "has never married -- a bit odd for a woman who spends so much time talking about balance in life." As SHAM continues on its determined path, it becomes clear that the book is anything but "the first serious exposé" of the self-help movement touted in the publicity materials. It is, rather, a kitchen-sink broadside, in which Salerno pins all sorts of evils on the industry. For example, in the movement's well-documented rhetoric of guiltlessness, he sees the very foundations of Western morality giving way: "We have the Recovery movement to thank for the fact that nowadays the people who criticize wrongdoers are sinners, while the wrongdoers themselves are simply 'human'. . . . Recovery's bedrock assumption -- that you're not evil or venal, you're simply exhibiting symptoms -- lays the groundwork for an amoral view of life. 
It explains why today's society goes to extraordinary semantic lengths to separate the criminal from the crime." This is all a bit much -- especially since the only proof Salerno offers for this grandiose claim concerns a sensational legal defense mounted by Rosemary Heinen, an embezzling executive at Starbucks, who said she suffered from "impulse control disorder." Salerno also fails to mention that the defense didn't work: Heinen was convicted and sentenced to a four-year jail term in 2002. That is the problem with much of SHAM: It is less a considered argument about the self-help world's many excesses than a long train of can-you-believe-this-crap anecdotage. The outrages are all real enough, but the reader wants some sustained explanation of why they keep occurring, and why this country, an alleged capital of Emersonian self-reliance, churns them out in such enormous quantities. But that would mean extending Salerno's argument beyond its self-imposed historical limitations. According to him, self-help kicks off with the emergence of Alcoholics Anonymous in 1935 and gathers real momentum with the publication of the transactional analysis bible I'm OK, You're OK in 1967, whereas most serious students of this strain of therapeutic belief, such as the cultural historian Donald Meyer, locate its roots in the late-19th century New Thought movement. Explaining the deeper sources of self-help's appeal also would involve hazarding some argument about the nature of the American self to begin with -- as Christopher Lasch did in his masterful critique of human potential (as it was then called) in The Culture of Narcissism (1978). That book, together with Meyer's landmark 1965 study, The Positive Thinkers, would be the best place to start reckoning with the bigger questions raised by our national romance with self-help. SHAM misses a great opportunity to follow up on those questions, and that is, indeed, a shame. Chris Lehmann is an editor at Congressional Quarterly. 
From checker at panix.com Sun Jul 10 16:01:32 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 12:01:32 -0400 (EDT) Subject: [Paleopsych] SW: On Mental Disorders in the US 1990-2003 Message-ID: Public Health: On Mental Disorders in the US 1990-2003 http://scienceweek.com/2005/sw050715-6.htm The following points are made by R.C. Kessler et al (New Engl. J. Med. 2005 352:2515): 1) In the 1980s, the Epidemiologic Catchment Area (ECA) Study found that 29.4 percent of the adults interviewed had had a mental disorder at some time in the 12 months before the interview (referred to as a "12-month mental disorder"), according to the criteria of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, third edition (DSM-III).[3] A fifth of those with a 12-month disorder received treatment. Half of all who received treatment did not meet the criteria for a 12-month disorder according to the ECA Study or the DSM-III. A decade later, the National Comorbidity Survey (NCS) found that 30.5 percent of people 15 to 54 years of age had conditions that met the criteria for a 12-month mental disorder according to the criteria of the DSM-III, revised (DSM-III-R).[4] A fourth of these patients received treatment. Roughly half of all who received treatment did not meet the criteria for a 12-month mental disorder according to the NCS or the DSM-III-R. 2) The results of the ECA study and the NCS are no longer valid owing to changes in the delivery of mental health care. The Substance Abuse and Mental Health Services Administration found that annual visits to mental health specialists (i.e., psychiatrists and psychologists) increased by 50 percent between 1992 and 2000.[5] The National Ambulatory Medical Care Survey found that the number of people receiving treatment for depression tripled between 1987 and 1997. 
The Robert Wood Johnson Foundation Community Tracking Survey found that the number of people with a serious mental illness who were treated by a specialist increased by 20 percent between 1997 and 2001. 3) The authors examined trends in the prevalence and rate of treatment of mental disorders among people 18 to 54 years of age during roughly the past decade. The authors conclude: Despite an increase in the rate of treatment, most patients with a mental disorder did not receive treatment.[1.2] References (abridged): 1. Department of Health and Human Services. Mental health: a report of the Surgeon General. Bethesda, Md.: National Institute of Mental Health, 1999 2. President's New Freedom Commission on Mental Health. Achieving the promise: transforming mental health care in America 3. Robins LN, Regier DA, eds. Psychiatric disorders in America: The Epidemiologic Catchment Area Study. New York: Free Press, 1991 4. Kessler RC, McGonagle KA, Zhao S, et al. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Arch Gen Psychiatry 1994;51:8-19 5. Manderscheid RW, Atay JE, Hernandez-Cartagana MR, et al. Highlights of organized mental health services in 1998 and major national and state trends. In: Manderscheid RW, Henderson MJ, eds. Mental health, United States, 2000. Washington, D.C.: Government Printing Office, 2001:135-71 New Engl. J. Med. http://www.nejm.org From checker at panix.com Sun Jul 10 16:01:41 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 12:01:41 -0400 (EDT) Subject: [Paleopsych] SW: On Disease in Marathon Runners Message-ID: Medical Biology: On Disease in Marathon Runners http://scienceweek.com/2005/sw050715-5.htm The following points are made by B.D. Levine and P.D. Thompson (New Engl. J. Med. 2005 352:1516): 1) As traditional as the marathon itself is the use of the event for research and of its runners as research subjects. 
In the second year of its existence, two physicians, Harold Williams and Horace D. Arnold, examined urine specimens from some of the runners and noted urinary casts and proteinuria -- findings that would later be known as "athletic pseudonephritis".[1] Clarence DeMar, a legendary Boston runner, won the marathon an incredible seven times. His total would probably have been higher had he not been advised against competing by a physician who detected what was undoubtedly an innocent flow murmur produced by DeMar's augmented cardiac stroke volume. DeMar was also a subject in studies performed by the noted Boston cardiologist Paul Dudley White, who had a lifelong interest in the marathon and had studied the heart rate of Boston participants in the 1915 and 1916 races. When DeMar died of colon cancer in 1958, White arranged for an autopsy on the already embalmed body. A report in 1961 [2] presented results from both White's earlier studies of DeMar and the autopsy, which showed that the diameter of DeMar's coronary arteries was approximately two to three times that in normal adults. White, a great advocate of exercise who often rode his bicycle to work, was a big fan of the marathon and, ironically, first recognized his own heart disease because of angina that developed as he jogged over to the race venue to watch David McKenzie of New Zealand win the 1967 race. 2) Research interest in marathon participants during the first decades of the 20th century was driven by concern for their health. Little was known about cardiac adaptations to endurance exercise, and what was known was determined by auscultation and the use of the "trained finger" for palpation and percussion. Hallmarks of an athlete's heart such as bradycardia, cardiac enlargement, and innocent flow murmurs, were, in the view of the clinicians of the day, possible signs of pathologic heart block, cardiomyopathy, and valvular obstruction. 
It was not until 1942 that White used electrocardiography to record markedly slow, but normal, sinus bradycardia in athletes. According to Tom Derderian, author of a history of the Boston Marathon,[3] marathoners were the test pilots and astronauts of their time, running where none had run before -- and possibly risking their health in the process. Concerns about the health of athletes ultimately abated with the growing understanding that these cardiac changes were normal physiological adaptations and that physical activity conferred multiple health benefits. 3) In actuality, marathoning is a reasonably safe sport, with less than one death per 50,000 participants. Deaths that occur during less extreme physical activity and in previously healthy persons are usually caused by cardiac disease -- predominantly, congenital problems such as hypertrophic cardiomyopathy or coronary anomalies in young athletes and atherosclerotic coronary artery disease in persons older than 35 years of age. 4) Nontraumatic causes of death among marathoners and ultramarathoners, military recruits, and persons who labor in hot and humid conditions are more varied; historically, they have included heat stroke and exertional rhabdomyolysis. These conditions are mitigated by adequate hydration, and preventive efforts have led to widespread recommendations for aggressive fluid consumption during endurance events such as marathons. These recommendations stemmed from the argument that because thirst may not be a precise indicator of the state of the plasma volume, fixed (and large) quantities of fluids should be consumed by athletes during endurance events, regardless of fitness level, body size, and known amount or composition of sweat loss. 5) However in 1981, during the 90-km Comrades Ultramarathon in South Africa, two cases of hyponatremia developed; they were later reported by Timothy Noakes in a runners' magazine called South African Runner. 
Although there has been vigorous debate about the relative importance of fluid overload as compared with sodium loss due to sweating in the development of hyponatremia in runners, an extensive literature has accumulated over the past 20 years documenting that the primary cause is water intake in excess of sodium loss. The relative importance of water loss and sodium loss depends on the type and duration of the race, weather conditions, and the rates of these losses (as well as the rate of replacement of water and sodium), which may vary widely among athletes.[3-5] 1. Williams H, Arnold HD. The effects of violent and prolonged muscular exercise upon the heart. Phila Med J 1899;3:1233-9 2. Currens JH, White PD. Half a century of running: clinical, physiologic and autopsy findings in the case of Clarence DeMar ("Mr. Marathon"). N Engl J Med 1961;265:988-993 3. Derderian T. The Boston Marathon: the first century of the world's premier running event. Champaign, Ill.: Human Kinetics, 1996 4. Casa D. Proper hydration for distance running -- identifying individual fluid needs. Indianapolis: USA Track & Field, 2003. 5. Maughan RJ, Burke LM, Coyle EF, eds. Food, nutrition and sports performance II: the International Olympic Committee consensus on sports nutrition. New York: Taylor & Francis Group/Routledge, 2004 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: ANTHROPOLOGY: ENDURANCE RUNNING AND HUMAN EVOLUTION The following points are made by D.M. Bramble and D.E. Lieberman (Nature 2004 432:345): 1) Most research on the evolution of human locomotion has focused on walking. There are a few indications that the earliest-known hominids were bipeds[1,2], and there is abundant fossil evidence that australopithecines habitually walked by at least 4.4 million years (Myr) ago[3,4]. 
Many researchers interpret the evolution of an essentially modern human-like body shape, first apparent in early Homo erectus, as evidence for improved walking performance in more open habitats that came at the expense of retained adaptations in the australopithecine postcranium for arboreal locomotion [5]. 2) Although the biomechanics of running, the other human gait, is well studied, only a few researchers have considered whether running was a mode of locomotion that influenced human evolution. This lack of attention is largely because humans are mediocre runners in several respects. Even elite human sprinters are comparatively slow, capable of sustaining maximum speeds of only 10.2 m/s for less than 15 s. In contrast, mammalian cursorial specialists such as horses, greyhounds, and pronghorn antelopes can maintain maximum galloping speeds of 15-20 m/s for several minutes. Moreover, running is more costly for humans than for most mammals, demanding roughly twice as much metabolic energy per distance travelled than is typical for a mammal of equal body mass. Finally, human runners are less manoeuvrable and lack many structural modifications characteristic of most quadrupedal cursors such as elongate digitigrade feet and short proximal limb segments. 3) However, although humans are comparatively poor sprinters, they also engage in a different type of running, endurance running (ER), defined as running many kilometers over extended time periods using aerobic metabolism. Although not extensively studied in non-humans, ER is unique to humans among primates, and uncommon among quadrupedal mammals other than social carnivores (such as dogs and hyenas) and migratory ungulates (such as wildebeest and horses). 4) In summary: Striding bipedalism is a key derived behavior of hominids that possibly originated soon after the divergence of the chimpanzee and human lineages. 
Although bipedal gaits include walking and running, running is generally considered to have played no major role in human evolution because humans, like apes, are poor sprinters compared to most quadrupeds. The authors assess how well humans perform at sustained long-distance running, and review the physiological and anatomical bases of endurance running capabilities in humans and other mammals. Judged by several criteria, humans perform remarkably well at endurance running, thanks to a diverse array of features, many of which leave traces in the skeleton. The fossil evidence of these features suggests that endurance running is a derived capability of the genus Homo, originating about 2 million years ago, and may have been instrumental in the evolution of the human body form. References (abridged): 1. Haile-Selassie, Y. Late Miocene hominids from the Middle Awash, Ethiopia. Nature 412, 178-181 (2001) 2. Galik, Y. et al. External and internal morphology of the BAR 1002'00 Orrorin tugenensis femur. Science 305, 1450-1453 (2004) 3. Ward, C. V. Interpreting the posture and locomotion of Australopithecus afarensis: where do we stand? Yb. Physical Anthropol. 35, 185-215 (2002) 4. Aiello, L. & Dean, M. C. An Introduction to Human Evolutionary Anatomy (Academic, London, 1990) 5. Rose, M. D. in Origine(s) de la Bipédie chez les Hominidés (eds Coppens, Y. & Senut, B.) 37-49 (CNRS, Paris, 1991) Nature http://www.nature.com/nature -------------------------------- Related Material: MEDICAL BIOLOGY: DOPING AND ATHLETIC PERFORMANCE The following points are made by Timothy D. Noakes (New Engl. J. Med. 2004 351:847): 1) Is it possible for the "natural" athlete who competes without chemical assistance to achieve record-breaking performances in sports requiring strength, power, speed, or endurance? 
Because doping tests are infrequently positive in international sports, it has been widely believed that the answer is yes -- and that few athletes competing in major sporting events, including the Olympic Games and the Tour de France, use performance-enhancing drugs. But multiple sources of evidence, including personal testimony(1,2) and an ever-increasing incidence of doping scandals, suggest the opposite: that widespread use of performance-enhancing drugs has fundamentally distorted the upper range of human athletic performance.(1,3-5) Unfortunately, a global code of silence has kept the problem hidden from public view.(4,5) 2) Drugs have been in sports for a long time. In the earliest modern Olympic Games, the drugs of choice included strychnine, heroin, cocaine, and morphine,(4) which were probably more harmful than helpful. The first "effective" performance-enhancing drugs, the amphetamines, which were used widely by soldiers in the Second World War, crossed over into sports in the early 1950s.(4) These drugs -- nicknamed "la bomba" by Italian cyclists and "atoom" by Dutch cyclists -- minimize the uncomfortable sensations of fatigue during exercise. By setting a safe upper limit to the body's performance at peak exertion, these unpleasant sensations prevent bodily harm. The artificial manipulation of this limit by drugs places athletes at risk for uncontrolled overexertion. 3) The first cases of fatal heatstroke in athletes using atoom were reported in the 1960s. In the 1967 Tour de France, elite British cyclist Tom Simpson died on the steep ascent of Mont Ventoux, allegedly because of amphetamine abuse. The precise extent to which amphetamines enhance athletic performance is unknown, since, as with all performance-enhancing drugs, there are few modern studies quantifying their effects. The convenient absence of such information represents further evidence of a hidden problem. 
A popular opinion is that la bomba can turn the usual Tour de France domestique, or support rider, into a stage winner. 4) Since amphetamines must be present in the body to be effective, the sole method of avoiding the detection of their use during competition is to substitute a clean urine sample for the doped specimen. A multitude of innovative techniques have been developed to accomplish this swap.(2) Cortisone, a potent but legal performance-enhancing drug used to dampen inflammation, also reduces the discomfort of heavy daily training and competition and lifts the mood. It is also widely abused by professional cyclists.(2) 5) Testosterone propionate (Testoviron), the prototype of the anabolic steroids, the second major group of potent performance-enhancing drugs, was synthesized in 1936 and appeared in sport sometime after the 1948 Olympic Games. The subsequent synthesis of methandrostenolone (Dianabol) in the US in 1958 and oral chlordehydromethyltestosterone (Turinabol) in East Germany after 1966 marked the beginning of the "virilization" of modern sport.(4) By increasing muscle size, these drugs increase strength, power, and sprinting speed; they also alter mood and speed the rate of recovery, permitting more intensive training and hence superior training adaptation. For maximal effect, anabolic steroids are used in combination with other hormones that have similar activity, including insulin, growth hormone, and insulin-like growth factor. They have multiple side effects, some of which are serious, including premature death. References: 1. Reiterer W. Positive -- an Australian Olympian reveals the inside story of drugs and sport. Sydney: Pan Macmillan Australia, 2000 2. Voet W. Breaking the chain: drugs and cycling; the true story. Fotheringham W, trans. London: Yellow Jersey, 2001 3. Franke WW, Berendonk B. Hormonal doping and androgenization of athletes: a secret program of the German Democratic Republic government. Clin Chem 1997;43:1262-1279 4. 
Hoberman JM. Mortal engines: the science of performance and the dehumanization of sport. New York: Free Press, 1992 5. Hoberman JM. How drug testing fails: the politics of doping control. In: Wilson W, Derse E, eds. Doping in elite sport: the politics of drugs in the Olympic movement. Champaign, Ill.: Human Kinetics, 2001:241-70 New Engl. J. Med. http://www.nejm.org From shovland at mindspring.com Sun Jul 10 19:16:12 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 10 Jul 2005 21:16:12 +0200 (GMT+02:00) Subject: [Paleopsych] Iraq war Message-ID: <19428674.1121022972832.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> Killing civilians may make perfect sense. After all, the politicians can't keep the war going if they know we are against them. Even if our elections are a sham, we still do them, and there is always the possibility of wild cards. -----Original Message----- From: Michael Christopher Sent: Jul 9, 2005 8:41 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] Iraq war >>If these bombers are Iraqi nationalists, there is hope that our departure from Iraq will cool the conflict. The thought of regional terrorists in the Middle East is indeed terrifying.<< --It's obviously a mixture, which brings up the question: will foreign terrorists transform Iraqi nationalists who would otherwise fight conventionally into terrorists who believe killing civilians is legitimate warfare? In the current situation, anyone who proves himself to be capable of killing US troops is likely to gain respect, but every bombing that targets Iraqis will most likely decrease respect for foreign terrorists. Has the number of Iraqi suicide bombers (targeting civilians rather than soldiers) increased since Zarqawi set up shop? 
Conventional fighters who target soldiers would likely taper off their efforts when US troops leave and Iraqis take up the job of security, allowing Sunnis and borderline resistance members to take part in government without losing face (many would feel that cooperating with a US-sponsored Iraqi government would be humiliating and an admission of defeat... removing US troops would remove that motive as well). But foreign terrorists aren't going to leave unless Iraqis as a people stand against them. That becomes more likely every time Iraqis are targeted by terrorists, so the logical strategy for the US is to stay until Iraqis are known to be solidly against the presence of foreign terrorists and Iraqi nationalists begin to separate themselves from foreign terrorists by denouncing attacks on civilians. Any Iraqi resistance fighter who feels it's dishonorable to kill civilians is going to be less and less comfortable being associated with Zarqawi's terrorists, more likely to develop rifts with foreign organizers and more likely to leak information leading to the capture of Zarqawi and other terrorist leaders. That pretty much sets up a series of events leading naturally to the war's end. Along with a global denunciation of terrorism by mainstream Muslims and renewed focus on the Israeli-Palestinian partition process, the general trend should be positive, as long as the US and Iran don't provoke one another into another war in the meantime. Bush will watch to see if Iranians stand up to their hardliners or bide their time thinking the US will intervene. With nobody in either party willing to impose a draft, an invasion of Iran is unlikely, leaving air strikes by the US or Israel as the only option. Would an air strike against Iran's government targets or nuclear facilities produce a larger pool of global terrorists seeking nuclear or biological weapons? Or would Iranian moderates take over immediately? Or both? 
One assumption that should be eliminated is that there is some fixed number of terrorists, and that it's a good thing to draw them all into Iraq to fight them on their own ground. That logic makes sense at first glance, but it's based on an assumption that's not safe to make. But regardless of how one analyzes the overall situation, the immediate solution to Iraq is for the US to stay until there is good reason to believe Iraqi moderates can establish security and prevent foreign terrorists from gaining influence and using Iraq as a training ground. Once Zarqawi is captured or killed and Iraqi resistance members begin shunning foreign terrorists, it will be a lot easier for the US to leave, and attention can be focused on Iran. I don't really trust the current administration to handle it gracefully, but we have what we have. Hopefully enough systems thinkers will focus on geopolitics to provide a counterweight to the gung-ho mentality that will want to rely on forceful moves that may backfire in the long term. We've tended to rely on those moves in the past, doing whatever seemed strongest in the short term, like a beginner chess player who takes every piece that's offered. The end result is messy. Strong moves made hastily can add up to a weak foundation.

Michael

____________________________________________________ Sell on Yahoo! Auctions - no fees. Bid on great items.
http://auctions.yahoo.com/ _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych

From christian.rauh at uconn.edu Mon Jul 11 00:19:27 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Sun, 10 Jul 2005 20:19:27 -0400 Subject: [Paleopsych] Iraq war In-Reply-To: <19428674.1121022972832.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> References: <19428674.1121022972832.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> Message-ID: <42D1BB0F.6070908@uconn.edu>

I was trying not to write in this thread but couldn't help myself. Following Steve's comment that "killing civilians may make perfect sense," I want to add that killing civilians is not an "evil" strategy that resistance fighters would not use had they not been infiltrated by terrorists. The strategy makes sense. Terrorism is a common strategy when confronting a much larger and more powerful adversary in war. It has always been used; the change today, apparently, is the suicide component. What makes no sense is the Iraqi resistance trying to go against the US military or US-equipped Iraqi forces in "conventional" fighting. The resistance would be crushed easily. If civilians are the only targets they can reach, then that's what they will hit. Of course, the threshold for that level of warfare is different for different groups, but the rationale is always the same. In the past many terrorists were known as freedom fighters -- oops, they still are called that. Yours, Christian

shovland at mindspring.com wrote: > Killing civilians may make perfect sense. > > After all, the politicians can't keep the war > going if they know we are against them. > > Even if our elections are a sham, we still > do them, and there is always the possibility > of wild cards.
--
~ I G N O R A N C E ~
The trouble with ignorance is precisely that if a person lacks virtue and knowledge, he's perfectly satisfied with the way he is. If a person isn't aware of a lack, he can not desire the thing which he isn't aware of lacking.
Symposium (204a), Plato

From anonymous_animus at yahoo.com Mon Jul 11 22:57:55 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Mon, 11 Jul 2005 15:57:55 -0700 (PDT) Subject: [Paleopsych] suicide bombings In-Reply-To: <200507111800.j6BI0fR22811@tick.javien.com> Message-ID: <20050711225755.16779.qmail@web30813.mail.mud.yahoo.com>

Christian says: >>Following Steve's comment "killing civilians may make perfect sense".
I want to add that killing civilians is not an "evil" strategy that resistance fighters would not use had they not been infiltrated by terrorists. The strategy makes sense.<<

--I think the major reason suicide bombings are used is that they result in high levels of media coverage, regardless of whether the tactic "works" in any strategic context. It works for groups that want increased status in pro-terrorist circles; it gets them fame and says, "We are a force to be reckoned with". But that's also a risk if an attack is too outrageous and results in the terrorists being labeled murderers rather than martyrs in the Islamic and Arabic press. Dishonor is worse than death to a suicide bomber, and the best way to marginalize terrorists is to dishonor them, to stain their reputation in the eyes of the audience they seek to influence. Making them look ineffective is one way to do that; referring to them as apostates and murderers rather than martyrs is another. It's safe to say that, at least in the eyes of the victims, suicide bombings are "evil" if the word has any meaning. Most soldiers are taught to demonize and dehumanize their enemy in order to kill without guilt-induced paralysis, but those who deliberately attack civilians are going a level beyond, and it should be noted that "what works" and "what is permissible in a civilized world" are two different things. The message must be sent that terrorism fails, backfires, and draws shame upon the groups that rely on it as a tactic. It should be equated with child molestation, not with the courage of conventional soldiers up against a greater force. Ideally, nonviolent resistance groups should be given opportunities to be seen as effective and powerful at the same time, increasing their status. The message should be, "Reject terrorism and repel an occupying government by appealing to the conscience of its people, rather than their fear".
This makes pragmatic sense -- when Americans or the British are afraid, they tend to want to feel powerful, and that means not giving in, not backing down. But both groups have high levels of compassion and would easily support a Palestinian nonviolence movement, or an Iraqi movement to end occupation through nonviolent resistance. If we saw Israeli tanks rolling over Palestinians on a daily basis, we'd all be demanding an end to the occupation, divesting from Israel and denouncing Israel's leadership. But if suicide bombings are on the front page, nobody will even notice if a tank rolls over a nonviolent protester.

Regarding the "terrorists are driven to do it" argument that comes up pretty often, it is possible to explain the behavior of anyone, from an Israeli soldier to a Palestinian suicide bomber to a serial killer or child molester in any nation, in terms of cause and effect -- as long as one is consistent (i.e. all behavior is caused by something, regardless of the perpetrator). But, in a paradox, one side's basest behavior is often framed as a product of another side's provocations, while the other side's behavior is assumed to be uncaused and therefore inexcusable. "Palestinian suicide bombers are driven to do it... Israelis are the real terrorists" is a common example. More consistent would be to say "Members of both sides are driven to do what they do". Palestinian suicide bombers and abusive Israeli police (as opposed to conventional Palestinian fighters and ordinary Israeli police) are both driven to do what they do, and both should be marginalized so that their actions gain them no status or respect. Whatever the cause of suicide bombings, the effect should be the total and unambiguous rejection of terrorist tactics by every nation on earth. The alternative, sending the message that "terrorism works," is intolerable.
The consequences of sending that message would be equivalent to rewarding urban gangs by withdrawing police patrols from their "turf", or excusing police brutality on the grounds that cops "have bad days and are entitled to let off a little steam".

>>If civilians are the only targets they can reach then that's what they will hit.<<

--So here's a catch-22. Should Israel and the US make their soldiers easier to kill in order to discourage attacks against civilians, or should they leave, with terrorist groups getting a corresponding increase in status for having repelled the occupiers? If terrorism succeeds in Spain AND Britain, every group that wants *anything* will consider it an effective option. If the US withdraws from Iraq and Iraqi moderates are terrorized out of power, the results will also be pretty intolerable. So the question for any military power that finds itself up against decentralized terrorism is "How do we reinforce the message that terrorism fails and is rejected by all civilized people, while rewarding groups that reject terrorism as a tactic?"

Michael

__________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com

From checker at panix.com Tue Jul 12 18:58:29 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 14:58:29 -0400 (EDT) Subject: [Paleopsych] CHE Colloquy: The Future of Europe Message-ID:

The Future of Europe The Chronicle of Higher Education: Colloquy Transcript http://chronicle.com/colloquy/2005/07/europe/ 5.7.17, at 1 p.m., U.S. Eastern time [Target article appended.]

The topic

For more than 50 years, the nations of Europe have been creeping toward economic and political unity. Last year, the European Union unveiled a draft constitution that was designed to streamline and harmonize the operations of the various bureaucracies in Brussels.
But this spring, voters in France and the Netherlands decisively rejected the constitution in popular referenda. Two weeks later, a summit of European leaders collapsed in acrimony. Now a political theorist at Harvard University is weighing in with an argument for a truly unitary European state. Glyn Morgan believes that if Europe is truly serious about balancing American military and diplomatic power, it should become a single sovereign state with a single military and foreign policy. More generally, he argues that most of the EU's supporters -- and most of its "Euroskeptic" opponents -- must answer fundamental questions about sovereignty, the purpose of the EU, and the costs and benefits of integration. Is his argument persuasive? Would a single military and foreign policy be beneficial, or even possible? What are the prospects for the more immediate goal of European economic integration?

The guest

Glyn Morgan is an associate professor of government and social studies at Harvard University. He earned a Ph.D. in political science at the University of California at Berkeley in 1998. His forthcoming book is The Idea of a European Superstate: Public Justification and European Integration (Princeton University Press, September). He is now at work on books about terrorism and anti-Americanism.

_________________________________________________________________ A transcript of the chat follows. _________________________________________________________________

David Glenn (Moderator): Welcome to the Chronicle's colloquy on European political integration. Many thanks to Glyn Morgan for taking time to be here.

_________________________________________________________________

Glyn Morgan: It's very nice to be here. Thank you for hosting me. I know that this is a day when, for many people, thoughts are elsewhere. I have lots of family and friends in London and I hope they are all OK.
_________________________________________________________________

Question from David Glenn: The miserable news from London this morning is yet another reminder of the power of (apparently) stateless terrorist cells. Some international-relations theorists have suggested that threats like these can't be analyzed or addressed within a state-centered, realist, balance-of-power framework. How do you reply to that line of argument? In terms of confronting the perpetrators of the Madrid and London attacks, does it really make any difference how Europe is configured?

Glyn Morgan: The first few paragraphs of the conclusion to the book came back to me this morning when I turned on the news. I'll repeat them here:

Imagine that on September 11 next year, terrorists based somewhere in the Maghreb fly hijacked passenger jets into the Westminster parliament, the Reichstag, the Vatican, and the Louvre. These attacks kill thousands. Let it further be imagined that the United States is either preoccupied with China or, in the wake of the recent disasters in Iraq, has lost all appetite for foreign military intervention. After years of complaining about US unilateralism, Europeans now fulminate against US isolationism. It is worth bearing this scenario in mind, because given existing military capabilities, Europe's nation-states, acting either singly or jointly, would be unable to conduct anything resembling the operation that the United States conducted to destroy Al Qaeda camps in Afghanistan in October and November of 2001. If terrorists based in camps in the Maghreb -- perhaps protected by a friendly host government -- promised to repeat their attacks, there would be little that European powers could -- other than fulminate against US isolationism -- do about it. It is partly in recognition of Europe's current military weakness and its one-sided dependence on the United States that a number of European political leaders have said that Europe needs to become a superpower.
Most of these political leaders want to see Europe become a superpower without becoming a superstate. Some intellectual proponents of a post-sovereign Europe believe that Europe could become a superpower -- albeit a superpower of a new and different type -- while operating under a radically decentered form of mixed government. The arguments of the last few chapters have tried to show that when situated in a context of violence, conflict, and wide disparities of power between states, many of the prevailing assumptions about European political integration look rather naïve.

I pretty much stand by that argument.

_________________________________________________________________

Question from Pete Mackey, at an educational foundation: My wife and I lived in Ireland for two years, 2000-02. Like people throughout Europe now, especially young people, we traveled widely, thanks to the low airfares and easy border crossings that typify today's EU. Doesn't the level of interaction, intermarriage, and cultural and economic exchange generated by this massive flow of people across EU countries create a level of integration that makes some of the more forcible bureaucratic proposals moot, or at least less about integration and more about consolidating political power in Brussels?

Glyn Morgan: This is an interesting point. Clearly, Europeans today travel much more within Europe. Perhaps we are on the verge of seeing something like a Europe-wide common culture. I would not, however, want to draw too sharp a distinction between the "voluntary" social exchanges you describe and "forcible" political or bureaucratic integration, simply because many of the social exchanges you describe were made possible by regulatory measures adopted in Brussels.

_________________________________________________________________

Question from James S. Taylor, Univ.
of Aveiro, Portugal: Did the EU err by first moving forward economically with the Euro, instead of advancing politically with the Constitution?

Glyn Morgan: Yes. The Euro was established as much for political as for economic reasons. I think Europe ought to have proceeded much further politically and socially before setting up a common currency and monetary policy. I'm generally in favor of European political integration. Yet, if I were to vote today in a British referendum on adopting the Euro, I would vote "No." I still think the long-term success of the Euro remains in doubt. Recent rumblings in Italy are, I suspect, the first of many.

_________________________________________________________________

Question from David Glenn: Could you briefly sketch your model of a "democratic model of justification," and explain why such forms of justification are important to the project of European political integration?

Glyn Morgan: I'm glad you asked that, because the Chronicle article gives a slightly misleading idea of the focus of the book. I do have my own highly opinionated views on how the European Union should be organized. I think -- although admittedly hardly anyone else does -- that Europe should form a unitary sovereign state. But that's really only part of what the book is about. I'm a political theorist. And this is a work of political theory -- applied political theory, as I like to think of it. I think political theorists, if they are not to become irrelevant, need to work with important real-world issues. People's positions on European integration are constructed from facts, values, and arguments. The task of applied political theory is to probe those facts, values, and arguments. The central claim of the book is that the EU is less in need of an institutional fix than a justificatory fix. The project of European integration needs, if it is to go any further, a debate about this project's point or purpose. Europe has gone as far as it can as an elite-led project.
Europeans need to pose the question: what's the justification for European political integration? Political theorists can help them answer this question. Unfortunately, political theorists have tended to ignore the issue of justification in favor of a debate about Europe's (alleged) democratic deficit. The assumption here is that many of Europe's popularity problems can be attributed to the fact that Europe's institutions are insufficiently democratic. That argument always struck me as silly for at least two reasons. One, Europe's institutions are not much less democratic than those of Europe's member states. And two, Eurosceptics tend to object not to the democratic failures of Europe's institutions but to their very existence. I think political theorists ought to focus on the justificatory challenge posed by Eurosceptics. The book makes a stab at this justificatory challenge. It does so in three stages.

Stage one defends what I call a democratic theory of justification. This part of my argument is indebted to the later work of John Rawls. I argue that any fundamental constitutional transformation needs to meet a stringent justificatory hurdle. Not just any argument will do. I argue that a democratic theory of justification must satisfy three requirements: (a) a requirement of publicity; (b) a requirement of accessibility; and (c) a requirement of sufficiency. In working out this part of my argument, I'm indebted not just to John Rawls but also to Gerry Gaus, Steve Macedo, Tim Scanlon, Chris Bertram, and others who have thought through the idea of public justification.

Stage two examines some of the most common arguments for European political integration in the light of this democratic theory of justification. I focus on two particular types of argument that have been put forward by proponents of European political integration -- welfare-based arguments and security-based arguments.
In my discussion of welfare-based arguments, I focus primarily on the contrasting sets of arguments put forward by Jürgen Habermas and Friedrich Hayek. Habermas represents the social democratic perspective; Hayek represents the classical liberal perspective. Broadly stated, their arguments capture, if at a more sophisticated theoretical level, much of what's at stake in the current debate -- thrown up by the French referendum -- over "social Europe" and "liberal Europe."

Stage three defends a particular type of security-based justification for European political integration. Security, as I understand it, is a complex value that includes at its core a conception of non-dependence. Here my argument is indebted to various contemporary neo-republican political theorists such as Richard Bellamy and Philip Pettit.

_________________________________________________________________

Question from David Glenn: Could you flesh out your claim that the EU's institutions are not much less democratic than the member nations' own governments? Many scholars of the EU insist that Brussels suffers from a serious democratic deficit. Alex Warleigh's recent book, for example, argues that the democratic deficit is serious, and that it "arose from the mistaken institutional design and developmental trajectory respectively given to and hoped for the Union by its key founders, who created technocratic structures that were supposed to create a federal state by stealth (and thus in the absence of public engagement, or even knowledge)."

Glyn Morgan: The "democratic deficit" debate has, in my opinion, attracted far more attention than it deserves. True, there are undemocratic features of the EU. The same holds true for most "democratic" states -- there's nothing terribly democratic about the U.S. Supreme Court or the Electoral College, for instance.
The more important point to recognize, I think, is that were the EU to become much more democratic -- say, by empowering the European Parliament -- the EU would immediately become even less legitimate in the eyes of European voters than is now the case. The EU suffers from a justification deficit rather than a democracy deficit. Many ordinary citizens have no clear view of why it ought to exist and why it ought to be granted more powers.

_________________________________________________________________

Question from Theodore Kariotis, University of Maryland: When you are a family of 15 and you have major problems in the family, you never add 10 more members to such a dysfunctional family. Do you think that this large expansion was a fatal blow to the EU?

Glyn Morgan: If you are a family of 15, do you wish to have 10 neighbors who are destitute and killing themselves? Almost certainly not. In other words, I think European Enlargement was a necessary and desirable step. I do, however, worry that the social and economic changes -- both in the former EU15 and in the new member states -- that Enlargement will bring about are changes that many Europeans are neither prepared for nor aware of. To reiterate an earlier point: I think that Europeans are in desperate need of political education. They need to understand why Europe added these 10 new members.

_________________________________________________________________

Question from David Glenn: What do you think of the argument recently put forward by T.R. Reid, Jeremy Rifkin, and other analysts, to the effect that Europe already represents a powerful and attractive alternative to the U.S. model of political economy and international relations?

Glyn Morgan: I find this line of argument unpersuasive. Reid and Rifkin -- whose books, I hasten to add, I found very provocative and enormously enjoyable -- both believe (to put it crudely) that Europe circa 2004 has reached the promised land. R. and R.
seem to think, from what I recall, that Europe (i) has established a more humane model of society than the United States; (ii) is better prepared to meet the challenges of the new century; and (iii) has wisely abandoned military power in favor of alternative, more useful forms of power. (Some of these arguments have been advanced more recently by Mark Leonard in his very interesting and quite splendid book Why Europe Will Run the 21st Century.)

I wouldn't endorse any of these claims. Europe, at the moment, does not possess a single model of society; and to the extent that it does, that model of society is neither more nor less humane than that of the United States. More specifically, there are aspects of Europe that are more humane -- it does not, for instance, rely upon mass incarceration as a solution to its social problems. And there are aspects of the United States that are more humane -- it is generally more open and accommodating to immigrants than Europe, for instance. Efforts to construct a European identity on the basis of differences with the United States -- a proto-European nationalism, as it were -- have all the drawbacks of every type of identity politics. It was disappointing to see Jacques Derrida and Jürgen Habermas (in their joint letter) go in for this sort of drivel.

Europe currently contains a variety of different models of welfare capitalism -- Liberal, Scandinavian, Continental, call them what you will. Europeans disagree among themselves about the merits of these different models. That disagreement cropped up (for largely unwarranted reasons) in the French referendum and led some people to vote against the Constitutional Treaty. Europeans have yet to work out which of these models they want to embrace and how much intra-European variety they will permit. In my view, for what it's worth, this disagreement is best played out at the parliamentary level -- both national and European -- and not cemented into place in a Constitutional Treaty.
The claim that Europe is uniquely qualified to run the next century is also unpersuasive. Europe, if it continues on its present path, will be uniquely well-qualified for nothing but global irrelevance: economically moribund; demographically geriatric; and internationally impotent. One of the gravest threats to the future of Europe comes from the misguided notion that it can prosper as a non-military civilian power. To rely, as Reid, Rifkin and others suggest, on soft-power, would turn Europe into a superannuated version of Liechtenstein. _________________________________________________________________ David Glenn (Moderator): We're just about halfway through our hour. Please keep those questions coming. . . _________________________________________________________________ Question from Wes Teter: Sitting in Brussels at the moment and having worked in the European Commission, I find it sad that no one speaks about the fact that previous agreements and successes among the member states are not undone simply because a constitution is not ratified. Further, Europe is unified politically, but chooses not to go as far as the single superstate. I agree that the project has not yet reached the public fully. How can the idea of Europe better reach its citizens do you think? How can the idea of Europe better reach the U.S.? Glyn Morgan: I'm less sanguine than you about the claim that "Europe is unified politically." I fear that the threads holding it together have now become rather frayed. As for the claim "the project has not yet reached the public fully" -- I'm not so sure that "reaching" is the problem. One of the great mistakes of the EU Commission is to believe that "to know us is to love us." It may well be that the more the public knows about the EU, the less the public likes it. The answer is to raise and debate fundamental justificatory questions, not simply to conduct more outreach efforts. 
_________________________________________________________________ Question from David Glenn: In Prospect magazine, Andrew Moravcsik [55] recently argued that this recent constitution-drafting process has been a mistake. Europe already has a perfectly workable constitutional order, he said, and its policies and structure are generally popular. He wrote: "So it was not the substance of the emerging constitutional settlement that triggered opposition. The objectionable aspect was its form: an idealistic constitution. Since the 1970s, lawyers have regarded the treaty of Rome as a de facto constitution. The new document was an unnecessary public relations exercise based on the seemingly intuitive, but in fact peculiar, notion that democratisation and the European ideal could legitimate the EU." What do you think of that line of argument? Glyn Morgan: Complete rubbish. Europe was hardly in a very healthy state prior to the decision to embark upon the Convention and the Constitutional Treaty. It had recently experienced a number of embarrassing referendum defeats; Europe's economy was (and still is) moribund; transatlantic relations were at a postwar low; Europe's existing institutional architecture was ill-suited to handling 10 new members; and Europe's budget was -- and still is -- a disgrace. The recent national referendums also make it abundantly clear that when given the chance many Europeans attacked the very substance of the EU itself. If this is a model of stability and legitimacy, I would like to know what my good friend would consider a constitutional crisis. _________________________________________________________________ Question from David Glenn: Why do you believe that Euroskeptics, too, need to use a democratic standard of justification when defending their proposals? Can't they plausibly argue that pro-integration advocates should carry a higher burden of proof? ("We're just defending the status quo. 
It's the fools in Brussels who are trying to change things.") Glyn Morgan: I spend a lot of time in the book discussing the arguments of Eurosceptics -- British Eurosceptics, in particular. I found, to my surprise, that far and away the most cogent statement of the Eurosceptic position is to be found in the writings and speeches of Enoch Powell -- a highly controversial figure in the postwar history of British parliamentary politics. Contemporary Eurosceptics tend to rehash at a less sophisticated level arguments that Powell had made in the 1970s. I think Eurosceptics need to use a democratic standard of justification, because their proposals amount to a fundamental transformation of the existing institutional status quo. Not happy with the current intergovernmental EU -- still less happy with the very minor changes envisaged in the Constitutional Treaty -- many British Eurosceptics seek either the withdrawal of Britain from the Union or the transformation of the Union into a free trade zone. These are far-reaching changes. They need to satisfy the same democratic standard of justification that applies to arguments for a single unitary sovereign Europe or to a multi-level post-sovereign Europe. _________________________________________________________________ Question from Daniele Archibugi, CNR, Rome, Italy: If the demoi of two founding countries of the EU have voted against the Constitution, isn't it the proof that the functionalist approach of Jean Monnet is a more viable strategy than the Federalist project of Altiero Spinelli? Glyn Morgan: Much depends here on what you mean by "a viable strategy." I think the Monnet approach, which worked quite well in Europe's formative stages, is now completely finished. Any further steps along the road of European integration will now have to engage more directly with ordinary citizens. That's why I think the question of justification is so central. 
I think Europe, if it is to go any further, needs to adopt something akin to the Irish National Forums, which were set up after the Irish rejected the Treaty of Nice. _________________________________________________________________ Question from David Glenn: How do you reply to Richard Sweeney's argument that Europe does not face any serious vulnerabilities, and therefore has no need of a unitary military and security policy? Glyn Morgan: I find this argument unpersuasive for at least three reasons. One, it rests upon a conception of security that is unappealing; two, it fails to comprehend the threats that Europe now confronts; and three, it is naïve about the military capabilities of highly decentralized, multi-centered polities. Let me say something briefly about just the first of those three points. It is important to recognize that people disagree about the nature of security as a value. Some people certainly do think of security in terms of the absence of immediate current military threats. If that's your conception of security, then Professor Sweeney's probably right to say that Europe does not face any serious vulnerabilities. But that's not my conception of security. And I would hope that that's not the conception of security adopted by Europe's political leaders. My argument works with a more expansive conception of security, which includes adequate safeguards against serious harms, even if they are currently of quite a low probability. The book spends quite a lot of time defining and defending "adequate safeguards," "serious harms," and so forth. _________________________________________________________________ Question from Rich Byrne, Chronicle of Higher Education: If Europe decided to pursue a more federal state, what immediate or near-term steps should the Union propose to ease the way? A mandatory euro? Creation of a standing multi-national army? 
Glyn Morgan: Let me be clear in saying that I do not think that Europe is likely any time soon to become a more federal state. The twin referendum defeats in France and the Netherlands bring to a close, in my view, a familiar process of integration. This process often involved -- as in the case of the Euro -- adopting a measure for one ostensible reason (economic) while expecting that it would have an additional (political) consequence. The near-term steps I would be in favor of adopting all involve engaging the citizenry in a debate about the desirability of European political integration. _________________________________________________________________ Question from David Glenn: Ivo Daalder of the Brookings Institution has argued that the current EU crisis will almost certainly lead to a slowdown in the EU's expansion plans -- and that that, in turn, will almost certainly slow down the process of political reform in the countries on the EU's periphery. He wrote: "Enlargement has proven to be the most successful strategy of regime change ever devised. While NATO enlargement proved important, for it provided security to countries living in the Soviet and Russian shadow, EU enlargement was absolutely crucial because it provided the basis for solidifying political freedom and enhancing economic prosperity in countries that had known little of either." What do you think of that line of argument? Glyn Morgan: I think that this is absolutely right. I've long been an advocate of European Enlargement -- including the admission of Turkey and the Ukraine. Ivo Daalder is absolutely right to worry about the consequences of the referendum defeats on political reform in places like Romania and Bulgaria. Hopefully, Europeans will come to their senses and re-commit themselves to European Enlargement. Having said that, I think it is important to recognize that European Enlargement is likely to generate far-reaching social and economic changes throughout Europe. 
We are likely to see a lot more mobility of people, industry, and financial capital than many people will like. _________________________________________________________________ Question from Russell Muirhead, Harvard University: You put much stress on security and self-defense. But are the humanitarian ideals so many Europeans endorse served by the sort of superstate and common military policy you advocate? Glyn Morgan: Yes. You are right to say that I have a security-based justification for European political integration. But I don't think that Europe needs to become more centralized and militarily potent merely for reasons of self-defense. Europe in its present form is incapable of projecting significant power abroad. That means that it is utterly incapable of autonomously intervening in any major humanitarian catastrophes. Europe has to depend on the United States. It makes little sense for a continent-load of people, more or less the economic equals of Americans, to be this dependent. _________________________________________________________________ Question from David Glenn: What do you see as the weaknesses of political theorists' celebrations of a flexible and "postsovereign" political order in Europe? 
(In a 1999 book, for example, Joseph Weiler of NYU Law School argued that the EU has transcended traditional forms of sovereignty and federalism because its members accept EU discipline "as an autonomous voluntary act; endlessly renewed on each occasion of subordination, in the discrete areas governed by Europe, which is the aggregate expression of other wills, other political identities, other political communities.") Glyn Morgan: The book sets up the debate over the future of Europe as a debate between three groups: Eurosceptics (who favor a Europe of independent nation-states); Post-sovereignists (who favor either the current intergovernmental EU or a more disaggregated multi-level post-sovereign polity); and European Sovereignists (who favor a unitary European Sovereign state). I chose, for reasons spelled out in the book, to steer clear of the term Federalist, which means very different things to different people and in different countries. I came to spend quite a lot of time on the post-sovereignist position, if only because it is extremely fashionable amongst contemporary legal and political theorists. Post-sovereignists like to tell us that sovereignty -- both internal and external -- is now obsolete. The age of the sovereign state is now over and done with. The great difficulty with this line of argument is that no one seems to have informed the United States government of this fact. The US is a jealous guardian of its sovereignty. This is not an invention of the Bush administration. The Clinton administration was exactly the same. When the most powerful entity in the international system is a sovereign state, then it makes no sense to talk about the death of sovereignty. If post-sovereignists wish to attack sovereignty, they must conduct the argument in normative terms. Here their arguments become altogether much weaker. I think post-sovereignists remain vulnerable to a security-based challenge. 
If we care about security as non-dependence, then it is not a good idea to adopt a form of governance -- as post-sovereignists would have us do -- that is incapable of balancing the power of the dominant entity in the international system. The post-sovereignist vision of the EU would contribute to the disappearance of Europe as an historical actor. _________________________________________________________________ Question from David Glenn: Would you be willing to offer any predictions about the future of agricultural subsidies in Europe? Glyn Morgan: I think Tony Blair, even if he has recently lost a battle on this front, will win the war. It makes very little sense to spend such a large percentage of the budget on agriculture. Reason, I expect, will ultimately prevail over current Franco-German intransigence. _________________________________________________________________ Question from David Glenn: In the recent volume Democracy and Federalism in the European Union and the United States: Exploring Post-National Governance, Robert Dahl argues that a Europe-wide democracy is unlikely, no matter what sort of "federal" arrangements are used. Europe's population, he argues, is probably too large and too diverse to sustain pan-European democratic structures. What do you think of that line of argument? Glyn Morgan: I don't see any link between the geographical size or scale of a state and its capacity for democratic institutions. I do, however, think that a common language is necessary. That minimal form of shared culture is absolutely indispensable particularly in the advanced industrial democracy that Europe hopes to become. True, there are some multinational exceptions to this rule -- Belgium, Canada, and India, for instance. But Belgium and Canada are forever on the verge of falling apart; and India is not (yet) an advanced industrial society. If Europe is to become politically unified, it needs a common language. 
I fully expect most Europeans to speak English (plus one other language) within fifty years. _________________________________________________________________ Question from David Glenn: Could nationalism ever operate on a pan-European level? Glyn Morgan: I spend a lot of time in the book on nationalism, for the simple reason that nationalism forms the ideological core of the Eurosceptic hostility to Europe. One of the conclusions I have drawn about nationalism is that this is a noun that always needs a qualifying adjective -- e.g., ethnic nationalism, liberal nationalism, xenophobic nationalism etc. I do some work in the book reconstructing the historical genealogy of nationalism. I've always found it perplexing that the great modern sociological theories of nationalism -- that of Ernest Gellner, in particular -- have had so very little contact with the history of political thought. Thus Gellner defines nationalism as a principle of political legitimacy -- indeed the modern principle of political legitimacy. But neither he nor his students have had much to say about how this principle of political legitimacy came to either displace or coexist with earlier principles of political legitimacy. Figuring out this puzzle about nationalism took me (via the work of Istvan Hont) back to the writings of Thomas Hobbes, Abbé Sieyès and some of the intellectual founders of the modern nation-state. Hobbes is a particularly important thinker for me, because he envisages a sovereign state, whose members lack any shared cultural or ethnic characteristics. Indeed for Hobbes, the demise of the sovereign state leaves nothing but individuals. It's interesting to ask why Hobbes's conception of the state -- a state without a culturally or ethnically defined nation -- was an historical nonstarter. Why, in other words, did the modern state require forms of horizontal and vertical solidarity -- i.e. 
solidarity between citizens and their political institutions and solidarity between citizens themselves -- grounded in and sustained by common cultural and/or ethnic characteristics? Much though I'd like to think that a modern state could survive in the absence of these solidarities grounded in at least some of these characteristics, I'm prepared to concede that no modern state could flourish in their absence. Ideally, these characteristics ought to be as nonexclusive as possible -- a common language, for instance. A European State thus needs, if it is to flourish, a common language. In all probability that language will be English -- which is already the de facto lingua franca of Europe as it is. Is this thin nationalism -- if it even counts as such -- feasible on a pan-European level? Sure. Why not? I don't think that national identities are fixed forever in place. Of course many ethnic and traditional nationalists would disagree. But their arguments -- to the extent that they have any -- are not consistent with what I've termed a democratic theory of justification. One final point on this topic: I think it is especially important to avoid efforts to identify a much thicker pan-European nationalism grounded in a set of unifying values alleged to distinguish Europe from America. That's the trouble, I think, with the line of argument taken by Derrida and Habermas in that letter I mentioned earlier. _________________________________________________________________ Glyn Morgan: Thanks everyone for the questions. They were all very challenging. Special thanks to David Glenn -- a real gent and a scholar. I now need a beer. _________________________________________________________________ David Glenn (Moderator): Thanks again for taking time to be here on a grim day. _________________________________________________________________ References 54. http://chronicle.com/free/v51/i44/44a01201.htm 55. 
http://www.prospectmagazine.co.uk/article_details.php?id=6939 --------------- Making the Case for a United States of Europe The Chronicle of Higher Education, 5.7.8 http://chronicle.com/free/v51/i44/44a01201.htm Even as the continental union falters, a Harvard professor says its architects haven't been ambitious enough By DAVID GLENN When voters in France and the Netherlands rejected the proposed constitution of the European Union this spring, the EU's architects initially put on a brave face. The referendum defeats were a painful setback, they said, but with another round of negotiations and some redrafting, a constitution could still be approved within a few years. And in any case, many of the draft constitution's less-controversial provisions could be enacted by the European Union Council -- the gathering of the member nations' ministers. The ship was battered but still seaworthy. Two weeks later, however, it sprang more leaks. At a summit in Brussels, the EU's leaders failed to agree on a new budget. Britain and France squabbled about agricultural policy. Poland and nine other eastern countries attempted to broker a face-saving deal -- even offering to sacrifice some of the subsidies they receive from the EU -- but they failed. The prime minister of Luxembourg told the news media that, no matter what the EU's bureaucrats might say to the contrary, "Europe is in deep crisis." So this might seem like an awkward time to release a book that makes the case for a United States of Europe. But that is precisely what Glyn Morgan, an associate professor of government and social studies at Harvard University, is about to do. In The Idea of a European Superstate: Public Justification and European Integration (Princeton University Press, September), Mr. Morgan argues that the European Union's designers have not been ambitious enough -- that they should go whole hog and create a union that usurps most elements of its member countries' sovereignty. Mr. 
Morgan grounds his argument on questions of security and foreign policy, using a classical balance-of-power framework. A unipolar world system overwhelmingly dominated by the United States is bound to be unstable, he says. If Europe is truly serious about balancing U.S. power, he continues, it must evolve into a sovereign entity with a single military command and a single foreign policy. The complex, multilayered, "post-sovereign" model promoted by many of the EU's architects simply won't do. Mr. Morgan invites his reader to imagine that foreign-based terrorists someday launch large-scale attacks in Europe, and that the United States cannot offer much help, because its own military is bogged down in China or Iraq or elsewhere. Without a unitary state and a unified military, he writes, "there would be little that European leaders could -- other than fulminate about U.S. isolationism -- do about it." That model is unlikely to find many admirers on either side of the usual Europhile-versus-Euroskeptic divide. The project of unification is generally defended (or attacked) these days on grounds of trade and economics, not war and peace. "Europeans need to confront this brutal choice," the British-born Mr. Morgan says. "Are they going to remain weak and dependent and maintain their decentralized government units, or are they going to try to become players in the world? And if they're going to become players in the world, they need to centralize. I think presenting that brutal choice is profoundly annoying to both sides of the debate." The unpopularity of Mr. Morgan's scheme, however, is largely beside the point. The true purpose of his book is not to inspire a movement for a superstate, but to provide a model of how to argue about European integration. Both the EU's architects and the EU's foes, he says, have evaded certain fundamental questions, and they have failed to justify their projects in terms that the broad European public could conceivably accept. 
It is no wonder, he says, that the EU's popularity has been slipping. Mr. Morgan hopes that his book will provoke Europhiles and Euroskeptics -- even if they reject his own prescriptions -- to adopt clearer modes of argument. The two sides have generally failed to squarely answer each other's longstanding claims, he says -- and that failure has generated the present impasse. "People are starting to demand answers to fundamental questions," he says. "Why are we doing this? What are the costs and benefits? They're raising basic questions about nationality and sovereignty, and not looking at the EU as a narrow set of economic arrangements." It's Not Just the Economy, Stupid Indeed, economic arrangements are the questions that Mr. Morgan wants to see less of in Europe's constitutional debates. The no votes in France and the Netherlands were driven, at least in part, by popular fears that the draft constitution would promote American-style laissez-faire economics and erode the French and Dutch social safety nets. Many of Britain's Euroskeptics, meanwhile, dislike the constitution for the opposite reason: They believe that it would drag Britain into a sclerotic social democratic economy. Both of those camps, Mr. Morgan says, can point to various provisions in the 349-page draft document that seem to bear out their fears. The constitution is full of enormously detailed language about commercial policy. (Article III, Section 221, proclaims that the EU will pursue its economic objectives "by the action it takes through the Structural Funds (European Agricultural Guidance and Guarantee Fund, Guidance Section; European Social Fund; European Regional Development Fund), the European Investment Bank, and the other existing financial instruments.") Mr. Morgan argues that there is no good reason why these economic questions should be so deeply embedded in the constitution. 
"The issue of social democracy versus economic free markets should be played out at the democratic level of the European Parliament," he says. "It shouldn't be locked in place at the level of the constitution." "The problem with Europe at the moment," he further argues, "is that they've centralized the wrong things, and they've not centralized the things that they ought to have centralized." Mr. Morgan would prefer to see a much simpler, more streamlined constitution that deals primarily with the question of how a confederation with 25 members (and more in the pipeline) can formulate a coherent foreign and military policy. Other scholars of the European Union, however, insist that it would not be so simple to remove economic language from the constitution. Alex Warleigh, a professor of international politics and public policy at the University of Limerick, in Ireland, says that certain economic questions should be settled clearly now, at the constitutional level, precisely because the public already distrusts the EU bureaucracy. To do otherwise, he says, "would be to repeat the tendency that the EU has always had, which is to obfuscate the real issues of power that are brought into play by European integration. And that would be a mistake because it lends credibility to those who say that Brussels is simply a power grabber." Mr. Warleigh, who is the author of Democracy in the European Union: Theory, Practice, and Reform (Sage, 2003), agrees with Mr. Morgan that the EU's architects should speak much more plainly about the fundamental purposes of European integration. "To present it all as a dry technical operation -- I just don't think that would be believed anymore," he says. "You could get away with that back in the 50s." Nonetheless, Mr. Warleigh says, the detailed commercial language in the constitution is probably inescapable. The Will to Power Mr. 
Morgan's primary policy idea -- that Europe should become a single sovereign state for foreign-policy purposes -- is not likely to be embraced any time soon. ("That's the great thing about being a political theorist," he says. "If an actual politician stood up and said we should abolish Britain or France in its present form, it would just be political suicide.") Nonetheless, he firmly believes that many of the EU's advocates, with their celebrations of post-sovereignty, multilayered arenas of governance, and "soft power," are deluding themselves. Old-fashioned Hobbesian sovereignty is still what makes the world operate, Mr. Morgan says, and the United States is in fact powerful because it is sovereign in the traditional sense. "Imagine if the United States had to get approval from all 50 governors for a procurement bill for the military," he says. "Under those circumstances the United States would never have been able to fight the Second World War." And yet that is approximately the cumbersome sort of arrangement that the EU's architects have created, Mr. Morgan believes. "Typically, Europeans will then say to me, 'Well, we don't want to become like America,'" he continues. "To which I will then say, 'Fine. But then you should shut up with whining and complaining that America does all these things you don't like.' As I said earlier, that's the brutal choice that Europeans face." Richard J. Sweeney, a professor of finance at Georgetown University who recently completed a comparative study of the U.S. and draft European constitutions, says that Mr. Morgan's proposal is "nonsense, and it's a huge threat toward killing off the European Union." In Mr. Sweeney's view, the nascent United States required a coherent foreign and military policy because it was isolated and vulnerable. Modern Europe, he says, faces no comparable vulnerabilities. Shoehorning all 25 EU members into a single foreign policy would be a recipe for divorce. 
"You can just imagine if a war were declared someday," he says, "and some member countries said, 'Well, this is intolerable.'" It would be much better, Mr. Sweeney says, to continue with the present ad hoc arrangement in which, for example, Britain and Poland choose to send troops to Iraq, while France and Germany demur. "Forcing these questions to be asked and answered when they don't have to be," he says, "is just asking for trouble." Desmond Dinan, a professor of international commerce at George Mason University and the author of Europe Recast: A History of European Union (Lynne Rienner, 2004), agrees with Mr. Morgan that there is a tension -- verging on hypocrisy -- when Europhiles celebrate post-sovereignty and at the same time talk about the need to balance American power. But he also says that there is no prospect of the unitary state that Mr. Morgan proposes. "National interests and national identities are just too strong," he says. A Broader Conversation So what will happen next? Mr. Morgan expects that there will be a cooling-off period, and that another constitution will be drafted in a few years, as long as the core institutions stay in place. "The nightmare scenario is that the euro fails," he says. (A few Italian politicians have recently murmured about a return to the lira.) Mr. Morgan is not particularly optimistic about his proposal for a unitary sovereign state, but he is hopeful that this summer's crisis will lead to a broader popular conversation about the European Union's purposes. "All of these fundamental justificatory questions are going to have to be played out in public now," he says. "I think the old elite Europe where the demos was locked out is over now, it's finished." Mr. Dinan, meanwhile, believes that the talk of crisis is overblown. "I saw a headline the other day that said something like 'Europe in Crisis: Schroeder Flies to Luxembourg for Consultation,'" he says. 
"And I thought, well, compared to Munich in 1938 or Europe in August 1914, this is really not too bad as crises go." It is a measure of the European Union's success, he says, that peace on the continent is taken so much for granted. Not everyone is so comfortable. Mr. Sweeney, of Georgetown, believes that the entire project of integration could quickly unravel, and that that in turn could lead to a serious risk of war. "Could the EU survive if Britain left?" he asks. "I don't think we know the answer to that. The union may have gotten so big that they can't now shrink without starting a process that sort of accidentally ends in disaster." Mr. Warleigh, of Limerick, is more sanguine, and has an elaborate plan for breathing new life into the union. He would like to see a new constitutional convention whose mission would be to draft two potential European constitutions. One would be an intergovernmental model, in which the member nations would have a good deal of power to shape and veto European legislation. The other would be a more unitary model, in which EU leaders would be chosen directly by the European electorate writ large. The two draft constitutions would then be voted on in referenda across Europe. If, say, the intergovernmental model won in most countries, but the Polish public preferred the unitary model, Poland would then hold a second vote about whether or not to remain in the EU. This is just the sort of fundamental debate -- a stark and clear conversation about the EU's structure -- that Mr. Morgan hopes to see. But he realizes that the most difficult arguments are probably yet to come. "The questions of political and military policy touch on questions of national identity," he says. "That's why this debate has now blown up." "The Euroskeptics have always said that sovereignty matters, and I agree with them," Mr. Morgan continues. "I just disagree about how it matters. Sovereignty ought to be located at the European and not at the national level. 
But to get into that issue, we really have to have a debate about nationalism." From checker at panix.com Tue Jul 12 18:58:43 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 14:58:43 -0400 (EDT) Subject: [Paleopsych] Lobster: 'Conspiracy Theories' and Clandestine Politics Message-ID: 'Conspiracy Theories' and Clandestine Politics http://www.lobster-magazine.co.uk/articles/l29consp.htm [Thanks to Laird for this.] by Jeffrey M. Bale From Lobster 29 Very few notions generate as much intellectual resistance, hostility, and derision within academic circles as a belief in the historical importance or efficacy of political conspiracies. Even when this belief is expressed in a very cautious manner, limited to specific and restricted contexts, supported by reliable evidence, and hedged about with all sorts of qualifications, it still manages to transcend the boundaries of acceptable discourse and violate unspoken academic taboos. The idea that particular groups of people meet together secretly or in private to plan various courses of action, and that some of these plans actually exert a significant influence on particular historical developments, is typically rejected out of hand and assumed to be the figment of a paranoid imagination. The mere mention of the word 'conspiracy' seems to set off an internal alarm bell which causes scholars to close their minds in order to avoid cognitive dissonance and possible unpleasantness, since the popular image of conspiracy both fundamentally challenges the conception most educated, sophisticated people have about how the world operates and reminds them of the horrible persecutions that absurd and unfounded conspiracy theories have precipitated or sustained in the past.
So strong is this prejudice among academics that even when clear evidence of a plot is inadvertently discovered in the course of their own research, they frequently feel compelled, either out of a sense of embarrassment or a desire to defuse anticipated criticism, to preface their account of it by ostentatiously disclaiming a belief in conspiracies. (1) They then often attempt to downplay the significance of the plotting they have uncovered. To do otherwise, that is, to make a serious effort to incorporate the documented activities of conspiratorial groups into their general political or historical analyses, would force them to stretch their mental horizons beyond customary bounds and, not infrequently, delve even further into certain sordid and politically sensitive topics. Most academic researchers clearly prefer to ignore the implications of conspiratorial politics altogether rather than deal directly with such controversial matters. A number of complex cultural and historical factors contribute to this reflexive and unwarranted reaction, but it is perhaps most often the direct result of a simple failure to distinguish between 'conspiracy theories' in the strict sense of the term, which are essentially elaborate fables even though they may well be based upon a kernel of truth, and the activities of actual clandestine and covert political groups, which are a common feature of modern politics. For this and other reasons, serious research into genuine conspiratorial networks has at worst been suppressed, as a rule been discouraged, and at best been looked upon with condescension by the academic community. (2) An entire dimension of political history and contemporary politics has thus been consistently neglected. (3) For decades scholars interested in politics have directed their attention toward explicating and evaluating the merits of various political theories, or toward analyzing the more conventional, formal, and overt aspects of practical politics. 
Even a cursory examination of standard social science bibliographies reveals that tens of thousands of books and articles have been written about staple subjects such as the structure and functioning of government bureaucracies, voting patterns and electoral results, parliamentary procedures and activities, party organizations and factions, the impact of constitutional provisions or laws, and the like. In marked contrast, only a handful of scholarly publications have been devoted to the general theme of political conspiracies--as opposed to popular anti-conspiracy treatises, which are very numerous, and specific case studies of events in which conspiratorial groups have played some role -- and virtually all of these concern themselves with the deleterious social impact of the 'paranoid style' of thought manifested in classic conspiracy theories rather than the characteristic features of real conspiratorial politics. (4) Only the academic literature dealing with specialized topics like espionage, covert action, political corruption, terrorism, and revolutionary warfare touches upon clandestine and covert political activities on a more or less regular basis, probably because such activities cannot be avoided when dealing with these topics. But the analyses and information contained therein are rarely incorporated into standard works of history and social science, and much of that specialized literature is itself unsatisfactory. Hence there is an obvious need to place the study of conspiratorial politics on a sound theoretical, methodological, and empirical footing, since ignoring the influence of such politics can lead to severe errors of historical interpretation. This situation can only be remedied when a clear-cut analytical distinction has been made between classic conspiracy theories and the more limited conspiratorial activities that are a regular feature of politics. 
'Conspiracy theories' share a number of distinguishing characteristics, but in all of them the essential element is a belief in the existence of a 'vast, insidious, preternaturally effective international conspiratorial network designed to perpetrate acts of the most fiendish character', acts which aim to 'undermine and destroy a way of life.' (5) Although this apocalyptic conception is generally regarded nowadays as the fantastic product of a paranoid mindset, in the past it was often accepted as an accurate description of reality by large numbers of people from all social strata, including intellectuals and heads of state. (6) The fact that a belief in sinister, all-powerful conspiratorial forces has not been restricted to small groups of clinical paranoids and mental defectives suggests that it fulfills certain important social functions and psychological needs.(7) First of all, like many other intellectual constructs, conspiracy theories help to make complex patterns of cause-and-effect in human affairs more comprehensible by means of reductionism and oversimplification. Secondly, they purport to identify the underlying source of misery and injustice in the world, thereby accounting for current crises and upheavals and explaining why bad things are happening to good people or vice versa. Thirdly, by personifying that source they paradoxically help people to reaffirm their own potential ability to control the course of future historical developments. After all, if evil conspirators are consciously causing undesirable changes, the implication is that others, perhaps through the adoption of similar techniques, may also consciously intervene to protect a threatened way of life or otherwise alter the historical process. In short, a belief in conspiracy theories helps people to make sense out of a confusing, inhospitable reality, rationalize their present difficulties, and partially assuage their feelings of powerlessness. 
In this sense, it is no different than any number of religious, social, or political beliefs, and is deserving of the same serious study. The image of conspiracies promoted by conspiracy theorists needs to be further illuminated before it can be contrasted with genuine conspiratorial politics. In the first place, conspiracy theorists consider the alleged conspirators to be Evil incarnate. They are not simply people with differing values or run-of-the-mill political opponents, but inhuman, superhuman, and/or anti-human beings who regularly commit abominable acts and are implacably attempting to subvert and destroy everything that is decent and worth preserving in the existing world. Thus, according to John Robison, the Bavarian Illuminati were formed 'for the express purpose of ROOTING OUT ALL THE RELIGIOUS ESTABLISHMENTS, AND OVERTURNING ALL THE EXISTING GOVERNMENTS IN EUROPE.' (8) This grandiose claim is fairly representative, in the sense that most conspiracy theorists view the world in similarly Manichean and apocalyptic terms. Secondly, conspiracy theorists perceive the conspiratorial group as both monolithic and unerring in the pursuit of its goals. This group is directed from a single conspiratorial centre, acting as a sort of general staff, which plans and coordinates all of its activities down to the last detail. Note, for example, Prince Clemens von Metternich's claim that a 'directing committee' of the radicals from all over Europe had been established in Paris to pursue their insidious plotting against established governments. (9) Given that presumption, it is no accident that many conspiracy theorists refer to 'the Conspiracy' rather than (lower case) conspiracies or conspiratorial factions, since they perceive no internal divisions among the conspirators.
Rather, as a group the conspirators are believed to possess an extraordinary degree of internal solidarity, which produces a corresponding degree of counter solidarity vis-a-vis society at large, and indeed it is this very cohesion and singleness of purpose which enables them to effectively execute their plans to destroy existing institutions, seize power, and eliminate all opposition. Thirdly, conspiracy theorists believe that the conspiratorial group is omnipresent, at least within its own sphere of operations. While some conspiracy theories postulate a relatively localized group of conspirators, most depict this group as both international in its spatial dimensions and continuous in its temporal dimensions. '[T]he conspirators planned and carried out evil in the past, they are successfully active in the present, and they will triumph in the future if they are not disturbed in their plans by those with information about their sinister designs.'(10) The conspiratorial group is therefore capable of operating virtually everywhere. As a consequence of this ubiquitousness, anything that occurs which has a broadly negative impact or seems in anyway related to the purported aims of the conspirators can thus be plausibly attributed to them. Fourthly, the conspiratorial group is viewed by conspiracy theorists as virtually omnipotent. In the past this group has successfully overthrown empires and nations, corrupted whole societies, and destroyed entire civilizations and cultures, and it is said to be in the process of accomplishing the same thing at this very moment. Its members are secretly working in every nook and cranny of society, and are making use of every subversive technique known to mankind to achieve their nefarious purposes. Nothing appears to be able to stand in their way--unless the warnings of the conspiracy theorists are heeded and acted upon at once. 
Even then there is no guarantee of ultimate victory against such powerful forces, but a failure to recognize the danger and take immediate countervailing action assures the success of those forces in the near future. Finally, for conspiracy theorists conspiracies are not simply a regular feature of politics whose importance varies in different historical contexts, but rather the motive force of all historical change and development. The conspiratorial group can and does continually alter the course of history, invariably in negative and destructive ways, through conscious planning and direct intervention. Its members are not buffeted about by structural forces beyond their control and understanding, like everyone else, but are themselves capable of controlling events more or less at will. This supposed ability is usually attributed to some combination of demonic influence or sponsorship, the possession of arcane knowledge, the mastery of devilish techniques, and/or the creation of a preternaturally effective clandestine organization. As a result, unpleasant occurrences which are perceived by others to be the products of coincidence or chance are viewed by conspiracy theorists as further evidence of the secret workings of the conspiratorial group. For them, nothing that happens occurs by accident. Everything is the result of secret plotting in accordance with some sinister design. This central characteristic of conspiracy theories has been aptly summed up by Donna Kossy in a popular book on fringe ideas: Conspiracy theories are like black holes--they suck in everything that comes their way, regardless of content or origin...Everything you've ever known or experienced, no matter how 'meaningless', once it contacts the conspiratorial universe, is enveloped by and cloaked in sinister significance. Once inside, the vortex gains in size and strength, sucking in everything you touch. 
(11) As an example of this sort of mechanism, one has only to mention the so-called 'umbrella man', a man who opened up an umbrella on a sunny day in Dealey Plaza just as President John F. Kennedy's motorcade was passing. A number of 'conspiracy theorists' have assumed that this man was signalling to the assassins, thus tying a seemingly trivial and inconsequential act into the alleged plot to kill Kennedy. It is precisely this totalistic, all-encompassing quality that distinguishes 'conspiracy theories' from the secret but often mundane political planning that is carried out on a daily basis by all sorts of groups, both within and outside of government. It should, however, be pointed out that even if the 'umbrella man' was wholly innocent of any involvement in a plot, as he almost certainly was, this does not mean that the Warren Commission's reconstruction of the assassination is accurate. However that may be, real covert politics, although by definition hidden or disguised and often deleterious in their impact, simply do not correspond to the bleak, simplistic image propounded by conspiracy theorists. Far from embodying metaphysical evil, they are perfectly and recognizably human, with all the positive and negative characteristics and potentialities which that implies. At the most basic level, all the efforts of individuals to privately plan and secretly initiate actions for their own perceived mutual benefit --insofar as these are intentionally withheld from outsiders and require the maintenance of secrecy for their success--are conspiracies. Moreover, in contrast to the claims of conspiracy theorists, covert politics are anything but monolithic. At any given point in time, there are dozens if not thousands of competitive political and economic groups engaging in secret planning and activities, and most are doing so in an effort to gain some advantage over their rivals among the others. 
Such behind-the-scenes operations are present on every level, from the mundane efforts of small-scale retailers to gain competitive advantage by being the first to develop new product lines to the crucially important attempts by rival secret services to penetrate and manipulate each other. Sometimes the patterns of these covert rivalries and struggles are relatively stable over time, whereas at other times they appear fluid and kaleidoscopic, as different groups secretly shift alliances and change tactics in accordance with their perceived interests. Even internally, within particular groups operating clandestinely, there are typically bitter disagreements between various factions over the specific courses of action to be adopted. Unanimity of opinioon historical judgements. There is probably no way to prevent this sort of unconscious reaction in the current intellectual climate, but the least that can be expected of serious scholars is that they carefully examine the available evidence before dismissing these matters out of hand. Footnotes 1. Compare Robin Ramsay, 'Conspiracy, Conspiracy Theories and Conspiracy Research', Lobster 19 (1990), p. 25: 'In intellectually respectable company it is necessary to preface any reference to actual political, economic, military or paramilitary conspiracies with the disclaimer that the speaker "doesn't believe in the conspiracy theory of history (or politics)".' This type of disclaimer quite clearly reveals the speaker's inability to distinguish between bona fide conspiracy theories and actual conspiratorial politics. 2. The word 'suppress' is not too strong here. I personally know of at least one case in which a very bright graduate student at a prestigious East Coast university was unceremoniously told by his advisor that if he wanted to write a Ph.D. thesis on an interesting historical example of conspiratorial politics he would have to go elsewhere to do so.
He ended up leaving academia altogether and became a professional journalist, in which capacity he has produced a number of interesting books and articles. 3. Complaints about this general academic neglect have often been made by those few scholars who have done research on key aspects of covert and clandestine politics which are directly relevant to this study. See, for example, Gary Marx, 'Thoughts on a Neglected Category of Social Movement Participant: The Agent Provocateur and the Informant', American Journal of Sociology 80:2 (September 1974), especially pp. 402-3. One of the few dissertations dealing directly with this topic, though not in a particularly skilful fashion, is Frederick A. Hoffman, 'Secret Roles and Provocation: Covert Operations in Movements for Social Change' (Unpublished Ph.D. Dissertation: UCLA Sociology Department, 1979). There are, of course, some excellent academic studies which have given due weight to these matters--for example, Nurit Schleifman, Undercover Agents in the Russian Revolutionary Movement: The SR Party, 1902-1914 (Basingstoke: Macmillan/St. Antony's College, 1988); and Jean-Paul Brunet, La police de l'ombre: Indicateurs et provocateurs dans la France contemporaine (Paris: Seuil, 1990)--but such studies are unfortunately few and far between. 4. The standard academic treatments of conspiracy theories are Richard Hofstadter, 'The Paranoid Style in American Politics', in Hofstadter, The Paranoid Style in American Politics and Other Essays (New York: Knopf, 1966), pp. 3-40; Norman Cohn, Warrant for Genocide: The Myth of the Jewish World-Conspiracy and the Protocols of the Elders of Zion (Chico, CA: Scholars, 1981 [1969]); J. M. Roberts, The Mythology of the Secret Societies (London: Secker & Warburg, 1972); Johannes Rogalla von Bieberstein, Die These von der Verschwörung, 1776-1945: Philosophen, Freimaurer, Juden, Liberale und Sozialisten als Verschwörer gegen die Sozialordnung (Frankfurt am Main: Peter Lang, 1976); and Carl F.
Graumann and Serge Moscovici, eds., Changing Conceptions of Conspiracy (New York: Springer, 1987). See also the journalistic studies by George Johnson, Architects of Fear: Conspiracy Theories and Paranoia in American Politics (Los Angeles: Tarcher, 1983); and Jonathan Vankin, Conspiracies, Cover-Ups, and Crimes: Political Manipulation and Mind Control in America (New York: Paragon House, 1992). 5. See Hofstadter, 'Paranoid Style', pp. 14, 29. 6. Although conspiracy theories have been widely accepted in the most disparate eras and parts of the world, and thus probably have a certain universality as explanatory models, at certain points in time they have taken on an added salience due to particular historical circumstances. Their development and diffusion seem to be broadly correlated with the level of social, economic, and political upheaval or change, though indigenous cultural values and intellectual traditions determine their specific form and condition their level of popularity. 7. As many scholars have pointed out, if such ideas were restricted to clinical paranoids, they would have little or no historical importance. What makes the conspiratorial or paranoid style of thought interesting and historically significant is that it frequently tempts more or less normal people and has often been diffused among broad sections of the population in certain periods. Conspiracy theories are important as collective delusions, delusions which nevertheless reflect real fears and real social problems, rather than as evidence of individual pathologies. See, for example, Hofstadter, 'Paranoid Style', pp. 3-4. 8. See his Proofs of a Conspiracy Against All the Religions and Governments of Europe, Carried on in the Secret Meetings of Free Masons, Illuminati, and Reading Societies, Collected from Good Authorities (New York: G. Forman, 1798), p. 14.
This exhibits yet another characteristic of 'conspiracy theorists'--the tendency to over-dramatize everything by using capital letters with reckless abandon. 9. See his 'Geheime Denkschrift über die Gründung eines Central-Comités der nordischen Mächte in Wien', in Aus Metternichs nachgelassenen Papieren, ed. by Richard Metternich-Winneburg (Vienna: 1881), vol. 1, p. 595, cited in Rogalla von Bieberstein, These von der Verschwörung, pp. 139-40. 10. Dieter Groh, 'Temptation of Conspiracy Theory, Part I', in Changing Conceptions of Conspiracy, p. 3. A classic example of conspiratorial works that view modern revolutionary movements as little more than the latest manifestations of subversive forces with a very long historical pedigree is the influential book by Nesta H. Webster, Secret Societies and Subversive Movements (London: Boswell, 1924). For more on Webster's background, see the biographical study by Richard M. Gilman, Behind World Revolution: The Strange Career of Nesta H. Webster (Ann Arbor: Insight, 1982), of which only one volume has so far appeared. 11. Kooks: A Guide to the Outer Limits of Human Belief (Portland: Feral House, 1994), p. 191. 12. For more on P2, see above all the materials published by the Italian parliamentary commission investigating the organization, which are divided into the majority (Anselmi) report, five dissenting minority reports, and over one hundred thick volumes of attached documents and verbatim testimony before the commission.
Compare also Martin Berger, Historia de la loggia masonica P2 (Buenos Aires: El Cid, 1983); Andrea Barbieri et al., L'Italia della P2 (Milan: Mondadori, 1981); Alberto Cecchi, Storia della P2 (Rome: Riuniti, 1985); Roberto Fabiani, I massoni in Italia (Milan: L'Espresso, 1978); Gianfranco Piazzesi, Gelli: La carriera di un eroe di questa Italia (Milan: Garzanti, 1983); Marco Ramat et al., La resistibile ascesa della P2: Poteri occulti e stato democratico (Bari: De Donato, 1983); Renato Risaliti, Licio Gelli, a carte scoperte (Florence: Fernando Brancato, 1991); and Gianni Rossi and Francesco Lombrassa, In nome della 'loggia': Le prove di come la massoneria segreta ha tentato di impadronarsi dello stato italiano. I retroscena della P2 (Rome: Napoleone, 1981). Pro-P2 works include those of Gelli supporter Pier Carpi, Il caso Gelli: La verità sulla loggia P2 (Bologna: INEI, 1982); and the truly Orwellian work by Gelli himself, La verità (Lugano: Demetra, 1989), which in spite of its title bears little resemblance to the truth. 13. For the AB, see Ivor Wilkins and Hans Strydom, The Super-Afrikaners: Inside the Afrikaner Broederbond (Johannesburg: Jonathan Ball, 1978); and J. H. P. Serfontein, Brotherhood of Power: An Exposé of the Secret Afrikaner Broederbond (Bloomington and London: Indiana University, 1978). Compare also B. M. Schoeman, Die Broederbond in die Afrikaner-politiek (Pretoria: Aktuele, 1982); and Adrien Pelzer, Die Afrikaner-Broederbond: Eerste 50 jaar (Cape Town: Tafelberg, 1979). 14. See his Historians' Fallacies: Toward a Logic of Historical Thought (New York: Harper & Row, 1970), pp. 74-8. From checker at panix.com Tue Jul 12 18:58:43 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 14:58:43 -0400 (EDT) Subject: [Paleopsych] NS: Did humans evolve in fits and starts? Message-ID: Did humans evolve in fits and starts?
http://www.newscientist.com/article.ns?id=dn7539&print=true * 17:30 17 June 2005 * Gaia Vince Humans may have evolved during a few rapid bursts of genetic change, according to a new study of the human genome, which challenges the popular theory that evolution is a gradual process. Researchers studying human chromosome 2 have discovered that the bulk of its DNA changes occurred in a relatively short period of time and, since then, only minor alterations have occurred. This backs a theory called punctuated equilibrium which suggests that evolution actually occurred as a series of jumps with long static periods between them. Evolutionary stages are marked by changes to the DNA sequences on chromosomes. One of the ways in which chromosomes are altered is through the duplications of sections of the chromosomes. These DNA fragments may be duplicated and inserted back into the chromosome, resulting in two copies of the section. Evan Eichler, associate professor of genomic sciences at the University of Washington in Seattle, US, and colleagues looked at duplicated DNA sequences on a specific section of chromosome 2, to compare them with ape genomes and Old World monkey genomes. They expected to find that duplications had occurred gradually over the last few million years. Instead, they found that the big duplications had occurred in a short period of time, relatively speaking, after which only smaller rearrangements occurred. Eichler found the bulk of the duplications were present in the genomes of humans, chimpanzees, gorillas and orang-utans, but were absent in Old World monkeys - such as baboons and macaques. Narrow window An analysis of the degree of chromosomal decay for this section showed that the major duplications occurred in the narrow window of evolutionary time between 20 million and 10 million years ago, after human ancestors had split from Old World monkeys, but before the divergence of humans and great apes. 
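The dating described above rests on molecular-clock arithmetic: the more substitutions separating the two copies of a duplicated segment, the older the duplication. A minimal sketch of that reasoning, with an illustrative substitution rate (the study's actual calibration is not reproduced here):

```python
def duplication_age_my(identity, rate_per_site_per_year=1.25e-9):
    """Estimate the age of a duplication, in millions of years, from the
    sequence identity between its two copies, under a simple molecular clock.

    Substitutions accumulate on both copies independently, so a divergence
    of d = 1 - identity corresponds to a time t = d / (2 * rate).
    The rate used here is illustrative, not the study's calibration.
    """
    d = 1.0 - identity
    years = d / (2.0 * rate_per_site_per_year)
    return years / 1e6
```

Under this clock, copies that are 97.5 per cent identical would date to roughly 10 million years ago, at the young end of the window the study reports; real analyses correct for multiple substitutions at the same site and for rate variation.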
"It is unclear why [these duplication] events occurred so frequently during this period of human and great ape evolutionary history. It is also unclear as to why they suddenly cease, at least in this region of chromosome 2," Eichler says. "Other regions may show different temporal biases." "The important implication here is that episodic bursts of activity challenge the concept of gradual clock-like changes during the course of genome evolution," he says. "Since duplications are important in the birth of new genes and large-scale chromosomal rearrangements, it may follow that these processes may have gone through similar episodes of activity followed by quiescence." Growing evidence Laurence Hurst, professor of evolutionary biology at the University of Bath in the UK, says the study was "very interesting", although he would like to see this punctuated evolution demonstrated for other chromosomes, to be more confident that this is a general pattern. "There is growing evidence that evolutionary processes may occur in bursts. We now know, for example, that 50 million years ago there was a burst of activity that resulted in lots of new genes being produced," he told New Scientist. It is unknown what effect the sudden duplication activity may have had on chromosome 2. Eichler theorises that it may have resulted in genes for increased brain size or pathogen evasion. "If specific regions of chromosomes can have very punctuated events, it means our models based on gradual evolution are probably wrong," he says. The group will continue looking at the chromosome duplications to try and correlate them with changes in gene function or expression.
Journal reference: Genome Research (vol 15, p 914)

Related Articles
Hominid inbreeding left humans vulnerable to disease
http://www.newscientist.com/article.ns?id=dn6920
25 January 2005
Jumping genes help choreograph development
http://www.newscientist.com/article.ns?id=mg18424702.700
October 2004
Wonderful spam
http://www.newscientist.com/article.ns?id=mg18224496.100
29 May 2004

Weblinks
Eichler Laboratory, University of Washington, Seattle, US
http://eichlerlab.gs.washington.edu/
Laurence Hurst, Bath University, UK
http://www.bath.ac.uk/bio-sci/hurst.htm
Genome Research
http://www.genome.org/

From checker at panix.com Tue Jul 12 18:58:36 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 14:58:36 -0400 (EDT) Subject: [Paleopsych] Technology Review: The Fading Memory of the State Message-ID: The Fading Memory of the State http://www.technologyreview.com/articles/05/07/issue/feature_memory.asp?p=0 By David Talbot July 2005 First, the summary, dated 5.7.11 The Chronicle of Higher Education: Magazine & journal reader http://chronicle.com/prm/daily/2005/07/2005071101j.htm A glance at the July issue of Technology Review: Preserving digital history The U.S. National Archives and Records Administration is struggling to find a way to preserve the enormous volume of digital records the federal government produces, writes David Talbot, a senior editor at the magazine. "Electronic records rot much faster than paper ones, and NARA must either figure out how to save them permanently or allow the nation to lose its grip on history," he argues. While the "most famous documents in NARA's possession -- the Declaration of Independence, the Constitution, and the Bill of Rights -- were written on durable calfskin parchment and can safely recline for decades behind glass in a bath of argon gas," it will take "a technological miracle to make digital data last that long," he writes.
To make matters worse, NARA is facing "thousands of incompatible data formats cooked up by the computer industry over the past several decades, not to mention the limited life span of electronic-storage media themselves," Mr. Talbot writes. And the records continue to pile up, he writes. Mr. Talbot quotes Eduard Mark, a U.S. Air Force historian, as saying that already "history as we have known it is dying, and with it the public accountability of government and rational public administration." If NARA doesn't act quickly, Mr. Talbot concludes, today's history will be lost. --Gabriela Montell --------------------------- The official repository of retired U.S. government records is a boxy white building tucked into the woods of suburban College Park, MD. The National Archives and Records Administration (NARA) is a subdued place, with researchers quietly thumbing through boxes of old census, diplomatic, or military records, and occasionally requesting a copy of one of the computer tapes that fill racks on the climate-controlled upper floors. Researchers generally don't come here to look for contemporary records, though. Those are increasingly digital, and still repose largely at the agencies that created them, or in temporary holding centers. It will take years, or decades, for them to reach NARA, which is charged with saving the retired records of the federal government (NARA preserves all White House records and around 2 percent of all other federal records; it also manages the libraries of 12 recent presidents). Unfortunately, NARA doesn't have decades to come up with ways to preserve this data. Electronic records rot much faster than paper ones, and NARA must either figure out how to save them permanently, or allow the nation to lose its grip on history. One clear morning earlier this year, I walked into a fourth-floor office overlooking the woods. 
I was there to ask Allen Weinstein--sworn in as the new Archivist of the United States in February--how NARA will deal with what some have called the pending "tsunami" of digital records. Weinstein is a former professor of history at Smith College and Georgetown University and the author of Perjury: The Hiss-Chambers Case (1978) and coauthor of The Story of America (2002). He is 67, and freely admits to limited technical knowledge. But a personal experience he related illustrates quite well the challenges he faces. In 1972, Weinstein was a young historian suing for the release of old FBI files. FBI director J. Edgar Hoover--who oversaw a vast machine of domestic espionage--saw a Washington Post story about his efforts, wrote a memo to an aide, attached the Post article and penned into the newspaper's margin: "What do we know about Weinstein?" It was a telling note about the mind-set of the FBI director and of the federal bureaucracy of that era. And it was saved--Weinstein later found the clipping in his own FBI file. But it's doubtful such a record would be preserved today, because it would likely be "born digital" and follow a convoluted electronic path. A modern-day J. Edgar Hoover might first use a Web browser to read an online version of the Washington Post. He'd follow a link to the Weinstein story. Then he'd send an e-mail containing the link to a subordinate, with a text note: "What do we know about Weinstein?" The subordinate might do a Google search and other electronic searches of Weinstein's life, then write and revise a memo in Microsoft Word 2003, and even create a multimedia PowerPoint presentation about his findings before sending both as attachments back to his boss.

Definitions
Megabyte: 1,024 kilobytes. The length of a short novel or the storage available on an average floppy disk.
Gigabyte: 1,024 megabytes. Roughly 100 minutes of CD-quality stereo sound.
Terabyte: 1,024 gigabytes. Half of the content in an academic research library.
Petabyte 1,024 terabytes. Half of the content in all U.S. academic research libraries. Exabyte 1,024 petabytes. Half of all the information generated in 1999. What steps in this process can be easily documented and reliably preserved over decades with today's technology? The short answer: none. "They're all hard problems," says Robert Chadduck, a research director and computer engineer at NARA. And they are symbolic of the challenge facing any organization that needs to retain electronic records for historical or business purposes. Imagine losing all your tax records, your high school and college yearbooks, and your child's baby pictures and videos. Now multiply such a loss across every federal agency storing terabytes of information, much of which must be preserved by law. That's the disaster NARA is racing to prevent. It is confronting thousands of incompatible data formats cooked up by the computer industry over the past several decades, not to mention the limited lifespan of electronic storage media themselves. The most famous documents in NARA's possession--the Declaration of Independence, the Constitution, and the Bill of Rights--were written on durable calfskin parchment and can safely recline for decades behind glass in a bath of argon gas. It will take a technological miracle to make digital data last that long. But NARA has hired two contractors--Harris Corporation and Lockheed Martin--to attempt that miracle. The companies are scheduled to submit competing preliminary designs next month for a permanent Electronic Records Archives (ERA). According to NARA's specifications, the system must ultimately be able to absorb any of the 16,000 other software formats believed to be in use throughout the federal bureaucracy--and, at the same time, cope with any future changes in file-reading software and storage hardware. It must ensure that stored records are authentic, available online, and impervious to hacker or terrorist attack. 
While Congress has authorized $100 million and President Bush's 2006 budget proposes another $36 million, the total price tag is unknown. NARA hopes to roll out the system in stages between 2007 and 2011. If all goes well, Weinstein says, the agency "will have achieved the start of a technological breakthrough equivalent in our field to major 'crash programs' of an earlier era--our Manhattan Project, if you will, or our moon shot."

Data Indigestion

NARA's crash data-preservation project is coming none too soon; today's history is born digital and dies young. Many observers have noted this, but perhaps none more eloquently than a U.S. Air Force historian named Eduard Mark. In a 2003 posting to a Michigan State University discussion group frequented by fellow historians, he wrote: "It will be impossible to write the history of recent diplomatic and military history as we have written about World War II and the early Cold War. Too many records are gone. Think of Villon's haunting refrain, 'Où sont les neiges d'antan?' [Where are the snows of yesteryear?] and weep....History as we have known it is dying, and with it the public accountability of government and rational public administration." Take the 1989 U.S. invasion of Panama, in which U.S. forces removed Manuel Noriega and 23 troops lost their lives, along with at least 200 Panamanian fighters and 300 civilians. Mark wrote (and recently stood by his comments) that he could not secure many basic records of the invasion, because a number were electronic and had not been kept. "The federal system for maintaining records has in many agencies--indeed in every agency with which I am familiar--collapsed utterly," Mark wrote. Of course, managing growing data collections is already a crisis for many institutions, from hospitals to banks to universities.
Tom Hawk, general manager for enterprise storage at IBM, says that in the next three years, humanity will generate more data--from websites to digital photos and video--than it generated in the previous 1,000 years. "It's a whole new set of challenges to IT organizations that have not been dealing with that level of data and complexity," Hawk says. In 1996, companies spent 11 percent of their IT budgets on storage, but that figure will likely double to 22 percent in 2007, according to International Technology Group of Los Altos, CA. Still, NARA's problem stands out because of the sheer volume of the records the U.S. government produces and receives, and the diversity of digital technologies they represent. "We operate on the premise that somewhere in the government they are using every software program that has ever been sold, and some that were never sold because they were developed for the government," says Ken Thibodeau, director of the Archives' electronic-records program. The scope of the problem, he adds, is "unlimited, and it's open ended, because the formats keep changing." The Archives faces more than a Babel of formats; the electronic records it will eventually inherit are piling up at an ever accelerating pace. A taste: the Pentagon generates tens of millions of images from personnel files each year; the Clinton White House generated 38 million e-mail messages (and the current Bush White House is expected to generate triple that number); and the 2000 census returns were converted into more than 600 million TIFF-format image files, some 40 terabytes of data. A single patent application can contain a million pages, plus complex files like 3-D models of proteins or CAD drawings of aircraft parts. All told, NARA expects to receive 347 petabytes (see "Definitions") of electronic records by 2022. Currently, the Archives holds only a trivial number of electronic records. 
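The storage-unit definitions quoted earlier make it easy to sanity-check the scale of these numbers. The sketch below uses only figures quoted in the article (40 terabytes for the 2000 census images, 347 petabytes projected by 2022); the comparison itself is ours, not NARA's.

```python
# Binary storage units, as defined in the article's sidebar.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
PB = 1024 * TB

census_2000 = 40 * TB       # the 600 million TIFF images from the 2000 census
projected_2022 = 347 * PB   # NARA's expected electronic holdings by 2022

# How many census-sized collections would the projected archive hold?
ratio = projected_2022 / census_2000
print(f"about {ratio:,.0f} collections the size of the 2000 census")
```

In other words, the projected 2022 holdings are nearly nine thousand times the size of the entire digitized 2000 census.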
Stored on steel racks in NARA's 11-year-old facility in College Park, the digital collection adds up to just five terabytes. Most of it consists of magnetic tapes of varying ages, many of them holding a mere 200 megabytes apiece--about the size of 10 high-resolution digital photographs. (The electronic holdings include such historical gems as records of military psychological-operations squads in Vietnam from 1970 to 1973, and interviews, diaries, and testimony collected by the U.S. Department of Justice's Watergate Special Prosecution Force from 1973 to 1977.) From this modest collection, only a tiny number of visitors ever seek to copy data; little is available over the Internet. Because the Archives has no good system for taking in more data, a tremendous backlog has built up. Census records, service records, Pentagon records of Iraq War decision-making, diplomatic messages--all sit in limbo at federal departments or in temporary record-holding centers around the country. A new avalanche of records from the Bush administration--the most electronic presidency yet--will descend in three and a half years, when the president leaves office. Leaving records sitting around at federal agencies for years, or decades, worked fine when everything was on paper, but data bits are nowhere near as reliable--and storing them means paying not just for the storage media, but for a sophisticated management system and extensive IT staff.

Data under the Desk

The good news is that at least some of the rocket science behind the Archives' "moon shot" is already being developed by industry, other U.S. government agencies, and foreign governments. For example, Hewlett-Packard, IBM, EMC, PolyServe, and other companies have developed "virtual storage" technologies that automatically spread terabytes of related data across many storage devices, often of different types.
Virtualization frees up IT staff, balances loads when demand for the data spikes, and allows hardware upgrades to be carried out without downtime. Although the Archives will need technologies far beyond virtual storage, the commercial efforts form a practical foundation. The Archives may also benefit from the examples of digital archives set up in other nations, such as Australia, where archivists are using open-source software called XENA (for XML Electronic Normalizing of Archives) to convert records into a standardized format that will, theoretically, be readable by future technologies. NARA will also follow the lead of the U.S. Library of Congress, which in recent years has begun digitizing collections ranging from early American sheet music to immigration photographs and putting them online, as part of a $100 million digital preservation program. But to extend the technology beyond such commercial and government efforts, NARA and the National Science Foundation are funding research at places like the San Diego Supercomputer Center. There, researchers are, among other things, learning how to extract data from old formats rapidly and make them useful in modern ones. For example, San Diego researchers took a collection of data on airdrops during the Vietnam War--everything from the defoliant Agent Orange to pamphlets--and reformatted it so it could be displayed using nonproprietary versions of digital-mapping programs known as geographic information systems, or GIS (see [3]"Do Maps Have Morals?" Technology Review, June 2005). Similarly, they took lists of Vietnam War casualties and put them in a database that can show how they changed over the years, as names were added or removed. These are the kinds of problems NARA will face as it "ingests" digital collections, researchers say. 
"NARA's problem is they will be receiving massive amounts of digital information in the future, and they need technologies that will help them import that data into their ERA--hundreds of millions of items, hundreds of terabytes of data," says Reagan Moore, director of data-knowledge computing at the San Diego center. Another hive of research activity on massive data repositories: MIT. Just as the government is losing its grip on administrative, military, and diplomatic history, institutions like MIT are losing their hold on research data--including the early studies and communications that led to the creation of the Internet itself. "MIT is a microcosm of the problems [NARA] has every day," says MacKenzie Smith, the associate director for technology at MIT Libraries. "The faculty members are keeping their research under their desks, on lots and lots of disks, and praying that nothing happens to it. We have a long way to go." Now MIT is giving faculty another place to put that data. Researchers can log onto the Internet and upload information--whether text, audio, video, images, or experimental data sets--into DSpace, a storage system created in collaboration with Hewlett-Packard and launched in 2002 (see "[4]MIT's DSpace Explained"). DSpace makes two identical copies of all data, catalogues relevant information about the data (what archivists call "metadata," such as the author and creation date), and gives each file a URL or Web address. This address won't change even if, say, the archivist later wants to put a given file into a newer format--exporting the contents of an old Word document into a PDF file, for instance. Indeed, an optional feature in DSpace will tell researchers which files are ready for such "migration." Because the software behind DSpace is open source, it is available for other institutions to adapt to their own digital-archiving needs; scores have already done so. 
Researchers at MIT and elsewhere are working on improvements such as an auditing feature that would verify that a file hasn't been corrupted or tampered with, and a system that checks accuracy when a file migrates into a new format. Ann Wolpert, the director of MIT Libraries (and chair of Technology Review's board of directors), says DSpace is just a small step toward tackling MIT's problems, never mind NARA's. "These changes have come to MIT and other institutions so rapidly that we didn't have the technology to deal with it," Wolpert says. "The technology solutions are still emerging." Robert Tansley, a Hewlett-Packard research scientist who worked on DSpace, says the system is a good start but cautions that "it is still quite new. It hasn't been tested or deployed at a massive scale, so there would need to be some work before it could support what the National Archives is looking at."

Digital Marginalia

But for all this promise, NARA faces many problems that researchers haven't even begun to think about. Consider Weinstein's discovery of the Hoover marginalia. How could such a tidbit be preserved today? And how can any organization that needs to track information--where it goes, who uses it, and how it's modified along the way--capture those bit streams and keep them as safe as older paper records? Saving the text of e-mail messages is technically easy; the challenge lies in managing a vast volume and saving only what's relevant. It's important, for example, to save the e-mails of major figures like cabinet members and White House personnel without also bequeathing to history trivial messages in which mid-level bureaucrats make lunch arrangements. The filtering problem gets harder as the e-mails pile up. "If you have 300 or 400 million of anything, the first thing you need is a rigorous technology that can deal with that volume and scale," says Chadduck.
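At bottom this is an appraisal-filtering problem. A toy sketch of the kind of rule involved is below; the message fields and the role list are invented for illustration, and real archival appraisal is far more nuanced than a sender check.

```python
# Illustrative only: fields and roles below are invented for this
# sketch, not drawn from any NARA or White House system.
SENIOR_ROLES = {"cabinet", "white-house-staff"}

def keep_for_history(message):
    """Crude appraisal rule: retain mail sent by senior officials."""
    return message["sender_role"] in SENIOR_ROLES

inbox = [
    {"sender_role": "cabinet", "subject": "Draft treaty language"},
    {"sender_role": "mid-level", "subject": "Lunch on Tuesday?"},
    {"sender_role": "white-house-staff", "subject": "Briefing memo"},
]
kept = [m for m in inbox if keep_for_history(m)]
print(len(kept), "of", len(inbox), "messages retained")
```

The hard part, as Chadduck notes, is not the rule but running anything like it reliably over hundreds of millions of messages and their attachments.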
More and more e-mails come with attachments, so NARA will ultimately need a system that can handle any type of attached file. Version tracking is another headache. In an earlier era, scribbled cross-outs and margin notes on draft speeches were a boon to understanding the thinking of presidents and other public officials. To see all the features of a given Microsoft Word document, such as tracked changes, it's best to open the document using the same version of Word that the document's creator used. This means that future researchers will need not only a new piece of metadata--what software version was used--but perhaps even the software itself, in order to re-create fonts and other formatting details faithfully. But saving the functionality of software--from desktop programs like Word to the software NASA used to test a virtual-reality model of the Mars Global Surveyor, for example--is a key research problem. And not all software keeps track of how it was actually used. Why might this matter? Consider the 1999 U.S. bombing of the Chinese embassy in Belgrade. U.S. officials blamed the error on outdated maps used in targeting. But how would a future historian probe a comparable matter--to check the official story, for example--when decision-making occurred in a digital context? Today's planners would open a map generated by GIS software, zoom in on a particular region, pan across to another site, run a calculation about the topography or other features, and make a targeting decision. If a historian wanted to review these steps, he or she would need information on how the GIS map was used. But "currently there are no computer science tools that would allow you to reconstruct how computers were used in high-confidence decision-making scenarios," says Peter Bajcsy, a computer scientist at the University of Illinois at Urbana-Champaign. "You might or might not have the same hardware, okay, or the same version of the software in 10 or 20 years.
But you would still like to know what data sets were viewed and processed, the methods used for processing, and what the decision was based on." That way, to stay with the Chinese embassy example, a future historian might be able to independently assess whether the database about the embassy was obsolete, or whether the fighter pilot who dropped the bomb had the right information before he took off. Producing such data is just a research proposal of Bajcsy's. NARA says that if such data is collected in the future, the agency will add it to the list of things needing preservation.

Data Curators

Even without tackling problems like this, NARA has its hands full. For three years, at NARA's request, a National Academy of Sciences panel has been advising the agency on its electronic-records program. The panel's chairman, computer scientist Robert F. Sproull of Sun Microsystems Laboratories in Burlington, MA, says he has urged NARA officials to scale back their ambitions for the ERA, at least at the start. "They are going to the all-singing, all-dancing solution rather than an incremental approach," Sproull says. "There are a few dozen formats that would cover most of what [NARA] has to do. They should get on with it. Make choices, encourage people submitting records to choose formats, and get on with it. If you become obsessed with getting the technical solution, you will never build an archive." Sproull counsels pragmatism above all. He points to Google as an example of how to deploy a workable solution that satisfies most information-gathering needs for most of the millions of people who use it. "What Google says is, 'We'll take all comers, and use best efforts. It means we won't find everything, but it does mean we can cope with all the data,'" Sproull says. Google is not an archive, he notes, but in the Google spirit, NARA should attack the problem in a practical manner.
That would mean starting with the few dozen formats that are most common, using whatever off-the-shelf archiving technologies will likely emerge over the next few years. But this kind of preservation-by-triage may not be an option, says NARA's Thibodeau. "NARA does not have discretion to refuse to preserve a format," he says. "It is inconceivable to me that a court would approve of a decision not to preserve e-mail attachments, which often contain the main substance of the communication, because it's not in a format NARA chose to preserve." Meanwhile, the data keep rolling in. After the 9/11 Commission issued its report on the attacks on the World Trade Center and the Pentagon, for example, it shut down and consigned all its records to NARA. A good deal of paper, along with 1.2 terabytes of digital information on computer hard disks and servers, was wheeled into NARA's College Park facility, where it sits behind a door monitored by a video camera and secured with a black combination lock. Most of the data, which consist largely of word-processing files and e-mails and their attachments, are sealed by law until January 2, 2009. They will probably survive that long without heroic preservation efforts. But "there's every reason to say that in 25 years, you won't be able to read this stuff," warns Thibodeau. "Our present will never become anybody's past." It doesn't have to be that way. Projects like DSpace are already dealing with the problem. Industry will provide a growing range of partial solutions, and researchers will continue to fill in the blanks. But clearly, in the decades to come, archives such as NARA will need to be staffed by a new kind of professional, an expert with the historian's eye of an Allen Weinstein but a computer scientist's understanding of storage technologies and a librarian's fluency with metadata. 
"We will have to create a new profession of 'data curator'--a combination of scientist (or other data specialist), statistician, and information expert," says MacKenzie Smith of the MIT Libraries. The nation's founding documents are preserved for the ages in their bath of argon gas. But in another 230 years or so, what of today's electronic records will survive? With any luck, the warnings from air force historian Mark and NARA's Thibodeau will be heeded. And historians and citizens alike will be able to go online and find that NARA made it to the moon, after all. References 3. http://www.technologyreview.com/articles/05/06/issue/review_maps.asp 4. http://www.technologyreview.com/articles/05/07/issue/feature_mit.asp From checker at panix.com Tue Jul 12 19:30:17 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 15:30:17 -0400 (EDT) Subject: [Paleopsych] Scientific American: The Mysteries of Mass Message-ID: The Mysteries of Mass http://www.sciam.com/print_version.cfm?articleID=000005FC-2927-12B3-A92783414B7F0000 June 27, 2005 Physicists are hunting for an elusive particle that would reveal the presence of a new kind of field that permeates all of reality. Finding that Higgs field will give us a more complete understanding about how the universe works By Gordon Kane Most people think they know what mass is, but they understand only part of the story. For instance, an elephant is clearly bulkier and weighs more than an ant. Even in the absence of gravity, the elephant would have greater mass--it would be harder to push and set in motion. Obviously the elephant is more massive because it is made of many more atoms than the ant is, but what determines the masses of the individual atoms? What about the elementary particles that make up the atoms--what determines their masses? Indeed, why do they even have mass? We see that the problem of mass has two independent aspects. First, we need to learn how mass arises at all. 
It turns out mass results from at least three different mechanisms, which I will describe below. A key player in physicists' tentative theories about mass is a new kind of field that permeates all of reality, called the Higgs field. Elementary particle masses are thought to come about from the interaction with the Higgs field. If the Higgs field exists, theory demands that it have an associated particle, the Higgs boson. Using particle accelerators, scientists are now hunting for the Higgs. The second aspect is that scientists want to know why different species of elementary particles have their specific quantities of mass. Their intrinsic masses span at least 11 orders of magnitude, but we do not yet know why that should be so. For comparison, an elephant and the smallest of ants differ by about 11 orders of magnitude of mass.

What Is Mass?

Isaac Newton presented the earliest scientific definition of mass in 1687 in his landmark Principia: "The quantity of matter is the measure of the same, arising from its density and bulk conjointly." That very basic definition was good enough for Newton and other scientists for more than 200 years. They understood that science should proceed first by describing how things work and later by understanding why. In recent years, however, the why of mass has become a research topic in physics. Understanding the meaning and origins of mass will complete and extend the Standard Model of particle physics, the well-established theory that describes the known elementary particles and their interactions. It will also resolve mysteries such as dark matter, which makes up about 25 percent of the universe.
The foundation of our modern understanding of mass is far more intricate than Newton's definition and is based on the Standard Model. At the heart of the Standard Model is a mathematical function called a Lagrangian, which represents how the various particles interact. From that function, by following rules known as relativistic quantum theory, physicists can calculate the behavior of the elementary particles, including how they come together to form compound particles, such as protons. For both the elementary particles and the compound ones, we can then calculate how they will respond to forces, and for a force F, we can write Newton's equation F = ma, which relates the force, the mass and the resulting acceleration. The Lagrangian tells us what to use for m here, and that is what is meant by the mass of the particle. But mass, as we ordinarily understand it, shows up in more than just F = ma. For example, Einstein's special relativity theory predicts that massless particles in a vacuum travel at the speed of light and that particles with mass travel more slowly, in a way that can be calculated if we know their mass. The laws of gravity predict that gravity acts on mass and energy as well, in a precise manner. The quantity m deduced from the Lagrangian for each particle behaves correctly in all those ways, just as we expect for a given mass. Fundamental particles have an intrinsic mass known as their rest mass (those with zero rest mass are called massless). For a compound particle, the constituents' rest mass and also their kinetic energy of motion and potential energy of interactions contribute to the particle's total mass. Energy and mass are related, as described by Einstein's famous equation, E = mc^2 (energy equals mass times the speed of light squared).
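The accounting just described can be written compactly. As a schematic (the explicit formula is ours, not the article's), a compound particle's mass is its total internal energy divided by the speed of light squared:

```latex
% Schematic: compound-particle mass via E = mc^2.
% Constituents' rest masses plus internal kinetic and potential energy.
M \;=\; \frac{E_{\text{total}}}{c^{2}}
  \;=\; \frac{1}{c^{2}}\left(\sum_{i} m_{i}c^{2}
        + E_{\text{kinetic}} + E_{\text{potential}}\right)
```

For the proton, as the next paragraph explains, the kinetic term dominates and the constituent rest masses are a small correction.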
An example of energy contributing to mass occurs in the most familiar kind of matter in the universe--the protons and neutrons that make up atomic nuclei in stars, planets, people and all that we see. These particles amount to 4 to 5 percent of the mass-energy of the universe. The Standard Model tells us that protons and neutrons are composed of elementary particles called quarks that are bound together by massless particles called gluons. Although the constituents are whirling around inside each proton, from outside we see a proton as a coherent object with an intrinsic mass, which is given by adding up the masses and energies of its constituents. The Standard Model lets us calculate that nearly all the mass of protons and neutrons is from the kinetic energy of their constituent quarks and gluons (the remainder is from the quarks' rest mass). Thus, about 4 to 5 percent of the entire universe--almost all the familiar matter around us--comes from the energy of motion of quarks and gluons in protons and neutrons.

The Higgs Mechanism

Unlike protons and neutrons, truly elementary particles--such as quarks and electrons--are not made up of smaller pieces. The explanation of how they acquire their rest masses gets to the very heart of the problem of the origin of mass. As I noted above, the account proposed by contemporary theoretical physics is that fundamental particle masses arise from interactions with the Higgs field. But why is the Higgs field present throughout the universe? Why isn't its strength essentially zero on cosmic scales, like the electromagnetic field? What is the Higgs field? The Higgs field is a quantum field. That may sound mysterious, but the fact is that all elementary particles arise as quanta of a corresponding quantum field. The electromagnetic field is also a quantum field (its corresponding elementary particle is the photon). So in this respect, the Higgs field is no more enigmatic than electrons and light.
The Higgs field does, however, differ from all other quantum fields in three crucial ways. The first difference is somewhat technical. All fields have a property called spin, an intrinsic quantity of angular momentum that is carried by each of their particles. Particles such as electrons have spin 1/2 and most particles associated with a force, such as the photon, have spin 1. The Higgs boson (the particle of the Higgs field) has spin 0. Having 0 spin enables the Higgs field to appear in the Lagrangian in different ways than the other particles do, which in turn allows--and leads to--its other two distinguishing features. The second unique property of the Higgs field explains how and why it has nonzero strength throughout the universe. Any system, including a universe, will tumble into its lowest energy state, like a ball bouncing down to the bottom of a valley. For the familiar fields, such as the electromagnetic fields that give us radio broadcasts, the lowest energy state is the one in which the fields have zero value (that is, the fields vanish)--if any nonzero field is introduced, the energy stored in the fields increases the net energy of the system. But for the Higgs field, the energy of the universe is lower if the field is not zero but instead has a constant nonzero value. In terms of the valley metaphor, for ordinary fields the valley floor is at the location of zero field; for the Higgs, the valley has a hillock at its center (at zero field) and the lowest point of the valley forms a circle around the hillock. The universe, like a ball, comes to rest somewhere on this circular trench, which corresponds to a nonzero value of the field. That is, in its natural, lowest energy state, the universe is permeated throughout by a nonzero Higgs field. The final distinguishing characteristic of the Higgs field is the form of its interactions with the other particles. 
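The valley-with-a-hillock metaphor corresponds to the standard textbook form of the Higgs potential. The explicit formula below is an assumption based on that standard treatment (the article itself gives only the metaphor): with the coefficient of the quadratic term negative, the lowest-energy field value is not zero.

```latex
% "Hillock at the center of the valley": with \mu^2 < 0 the minimum
% of the potential lies on a circle of nonzero field values.
V(\phi) = \mu^{2}\,|\phi|^{2} + \lambda\,|\phi|^{4},
  \qquad \mu^{2} < 0,\ \lambda > 0
% Minimizing V gives the lowest-energy (vacuum) field strength:
|\phi|_{\min}^{2} = -\frac{\mu^{2}}{2\lambda}
  \;\equiv\; \frac{v^{2}}{2} \;\neq\; 0
```

The circle of minima is the "circular trench" of the metaphor; the universe settles at some point on it, so the field is nonzero everywhere.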
Particles that interact with the Higgs field behave as if they have mass, proportional to the strength of the field times the strength of the interaction. The masses arise from the terms in the Lagrangian that have the particles interacting with the Higgs field. Our understanding of all this is not yet complete, however, and we are not sure how many kinds of Higgs fields there are. Although the Standard Model requires only one Higgs field to generate all the elementary particle masses, physicists know that the Standard Model must be superseded by a more complete theory. Leading contenders are extensions of the Standard Model known as Supersymmetric Standard Models (SSMs). In these models, each Standard Model particle has a so-called superpartner (as yet undetected) with closely related properties [see "The Dawn of Physics beyond the Standard Model," by Gordon Kane; Scientific American, June 2003]. With the Supersymmetric Standard Model, at least two different kinds of Higgs fields are needed. Interactions with those two fields give mass to the Standard Model particles. They also give some (but not all) mass to the superpartners. The two Higgs fields give rise to five species of Higgs boson: three that are electrically neutral and two that are charged. The masses of particles called neutrinos, which are tiny compared with other particle masses, could arise rather indirectly from these interactions or from yet a third kind of Higgs field. Theorists have several reasons for expecting the SSM picture of the Higgs interaction to be correct. First, without the Higgs mechanism, the W and Z bosons that mediate the weak force would be massless, just like the photon (which they are related to), and the weak interaction would be as strong as the electromagnetic one. Theory holds that the Higgs mechanism confers mass to the W and Z in a very special manner. Predictions of that approach (such as the ratio of the W and Z masses) have been confirmed experimentally. 
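In standard notation (the symbols here are our assumption; the article does not define them), "strength of the field times strength of the interaction" becomes a simple product: a fermion's mass is its coupling strength y_f times the field's vacuum value v.

```latex
% Fermion mass from interaction with the nonzero Higgs field:
% y_f is the Yukawa coupling strength, v the field's vacuum value.
m_{f} = \frac{y_{f}\, v}{\sqrt{2}}
```

On this picture the enormous spread of particle masses traces back to the spread of the couplings y_f, which is exactly the "why these specific quantities" question the article raises.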
Second, essentially all other aspects of the Standard Model have been well tested, and with such a detailed, interlocking theory it is difficult to change one part (such as the Higgs) without affecting the rest. For example, the analysis of precision measurements of W and Z boson properties led to the accurate prediction of the top quark mass before the top quark had been directly produced. Changing the Higgs mechanism would spoil that and other successful predictions. Third, the Standard Model Higgs mechanism works very well for giving mass to all the Standard Model particles, W and Z bosons, as well as quarks and leptons; the alternative proposals usually do not. Next, unlike the other theories, the SSM provides a framework to unify our understanding of the forces of nature. Finally, the SSM can explain why the energy "valley" for the universe has the shape needed by the Higgs mechanism. In the basic Standard Model the shape of the valley has to be put in as a postulate, but in the SSM that shape can be derived mathematically.

Testing the Theory

Naturally, physicists want to carry out direct tests of the idea that mass arises from the interactions with the different Higgs fields. We can test three key features. First, we can look for the signature particles called Higgs bosons. These quanta must exist, or else the explanation is not right. Physicists are currently looking for Higgs bosons at the Tevatron Collider at Fermi National Accelerator Laboratory in Batavia, Ill. Second, once they are detected we can observe how Higgs bosons interact with other particles. The very same terms in the Lagrangian that determine the masses of the particles also fix the properties of such interactions. So we can conduct experiments to test quantitatively the presence of interaction terms of that type. The strength of the interaction and the amount of particle mass are uniquely connected.
Third, different sets of Higgs fields, as occur in the Standard Model or in the various SSMs, imply different sets of Higgs bosons with various properties, so tests can distinguish these alternatives, too. All that we need to carry out the tests are appropriate particle colliders--ones that have sufficient energy to produce the different Higgs bosons, sufficient intensity to make enough of them and very good detectors to analyze what is produced. A practical problem with performing such tests is that we do not yet understand the theories well enough to calculate what masses the Higgs bosons themselves should have, which makes searching for them more difficult because one must examine a range of masses. A combination of theoretical reasoning and data from experiments guides us about roughly what masses to expect. The Large Electron-Positron Collider (LEP) at CERN, the European laboratory for particle physics near Geneva, operated over a mass range that had a significant chance of including a Higgs boson. It did not find one--although there was tantalizing evidence for one just at the limits of the collider's energy and intensity--before it was shut down in 2000 to make room for constructing a newer facility, CERN's Large Hadron Collider (LHC). The Higgs must therefore be heavier than about 120 proton masses. Nevertheless, LEP did produce indirect evidence that a Higgs boson exists: experimenters at LEP made a number of precise measurements, which can be combined with similar measurements from the Tevatron and the collider at the Stanford Linear Accelerator Center. The entire set of data agrees well with theory only if certain interactions of particles with the lightest Higgs boson are included and only if the lightest Higgs boson is not heavier than about 200 proton masses. That provides researchers with an upper limit for the mass of the Higgs boson, which helps focus the search. 
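The article quotes the mass window in proton masses; particle physicists usually quote it in GeV. A quick conversion sketch (assuming only the standard proton mass of about 0.938 GeV/c^2, not a figure from the article):

```python
# Convert the article's Higgs-mass bounds from proton masses to GeV.
PROTON_MASS_GEV = 0.938  # approximate proton mass in GeV/c^2

def to_gev(n_proton_masses):
    """Express a mass given in proton masses in GeV/c^2."""
    return n_proton_masses * PROTON_MASS_GEV

lower = to_gev(120)  # LEP's direct lower limit, per the article
upper = to_gev(200)  # indirect upper limit from precision data

print(f"Higgs window: {lower:.0f} - {upper:.0f} GeV/c^2")
# -> Higgs window: 113 - 188 GeV/c^2
```

These figures line up with the commonly quoted LEP direct limit of roughly 114 GeV, which is one way to sanity-check the article's "about 120 proton masses."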
For the next few years, the only collider that could produce direct evidence for Higgs bosons will be the Tevatron. Its energy is sufficient to discover a Higgs boson in the range of masses implied by the indirect LEP evidence, if it can consistently achieve the beam intensity it was expected to have, which so far has not been possible. In 2007 the LHC, which is seven times more energetic and is designed to have far more intensity than the Tevatron, is scheduled to begin taking data. It will be a factory for Higgs bosons (meaning it will produce many of the particles a day). Assuming the LHC functions as planned, gathering the relevant data and learning how to interpret it should take one to two years. Carrying out the complete tests that show in detail that the interactions with Higgs fields are providing the mass will require a new electron-positron collider in addition to the LHC (which collides protons) and the Tevatron (which collides protons and antiprotons).

Dark Matter

What is discovered about Higgs bosons will not only test whether the Higgs mechanism is indeed providing mass, it will also point the way to how the Standard Model can be extended to solve problems such as the origin of dark matter. With regard to dark matter, a key particle of the SSM is the lightest superpartner (LSP). Among the superpartners of the known Standard Model particles predicted by the SSM, the LSP is the one with the lowest mass. Most superpartners decay promptly to lower-mass superpartners, a chain of decays that ends with the LSP, which is stable because it has no lighter particle that it can decay into. (When a superpartner decays, at least one of the decay products should be another superpartner; it should not decay entirely into Standard Model particles.)
Superpartner particles would have been created early in the big bang but then promptly decayed into LSPs. The LSP is the leading candidate particle for dark matter. The Higgs bosons may also directly affect the amount of dark matter in the universe. We know that the amount of LSPs today should be less than the amount shortly after the big bang, because some would have collided and annihilated into quarks and leptons and photons, and the annihilation rate may be dominated by LSPs interacting with Higgs bosons. As mentioned earlier, the two basic SSM Higgs fields give mass to the Standard Model particles and some mass to the superpartners, such as the LSP. The superpartners acquire more mass via additional interactions, which may be with still further Higgs fields or with fields similar to the Higgs. We have theoretical models of how these processes can happen, but until we have data on the superpartners themselves we will not know how they work in detail. Such data are expected from the LHC or perhaps even from the Tevatron. Neutrino masses may also arise from interactions with additional Higgs or Higgs-like fields, in a very interesting way. Neutrinos were originally assumed to be massless, but since 1979 theorists have predicted that they have small masses, and over the past decade several impressive experiments have confirmed the predictions [see "Solving the Solar Neutrino Problem," by Arthur B. McDonald, Joshua R. Klein and David L. Wark; Scientific American, April 2003]. The neutrino masses are less than a millionth the size of the next smallest mass, the electron mass. Because neutrinos are electrically neutral, the theoretical description of their masses is more subtle than for charged particles. Several processes contribute to the mass of each neutrino species, and for technical reasons the actual mass value emerges from solving an equation rather than just adding the terms. 
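The remark that a neutrino's mass "emerges from solving an equation rather than just adding the terms" can be made concrete with the best-known textbook example, the seesaw mechanism (a standard construction, not necessarily the exact scheme the author has in mind). A light scale and a heavy scale mix through a mass matrix, and the physical masses are its eigenvalues:

```latex
% Seesaw mass matrix mixing a Dirac scale m_D with a heavy scale M:
M_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M \end{pmatrix},
\qquad M \gg m_D
% Solving \det(M_\nu - \lambda I) = \lambda^2 - M\lambda - m_D^2 = 0 gives
\lambda \approx -\frac{m_D^2}{M}
\quad\text{(the tiny neutrino mass)}
\qquad\text{and}\qquad
\lambda \approx M
```

Because the tiny mass comes out of the characteristic equation rather than from any single term, a very heavy scale M naturally drives the neutrino mass far below all the other fermion masses.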
Thus, we have understood the three ways that mass arises: The main form of mass we are familiar with--that of protons and neutrons and therefore of atoms--comes from the motion of quarks bound into protons and neutrons. The proton mass would be about what it is even without the Higgs field. The masses of the quarks themselves, however, and also the mass of the electron, are entirely caused by the Higgs field. Those masses would vanish without the Higgs. Last, but certainly not least, most of the amount of superpartner masses, and therefore the mass of the dark matter particle (if it is indeed the lightest superpartner), comes from additional interactions beyond the basic Higgs one. Finally, we consider an issue known as the family problem. Over the past half a century physicists have shown that the world we see, from people to flowers to stars, is constructed from just six particles: three matter particles (up quarks, down quarks and electrons), two force quanta (photons and gluons), and Higgs bosons--a remarkable and surprisingly simple description. Yet there are four more quarks, two more particles similar to the electron, and three neutrinos. All are very short-lived or barely interact with the other six particles. They can be classified into three families: up, down, electron neutrino, electron; charm, strange, muon neutrino, muon; and top, bottom, tau neutrino, tau. The particles in each family have interactions identical to those of the particles in other families. They differ only in that those in the second family are heavier than those in the first, and those in the third family are heavier still. Because these masses arise from interactions with the Higgs field, the particles must have different interactions with the Higgs field. Hence, the family problem has two parts: Why are there three families when it seems only one is needed to describe the world we see? Why do the families differ in mass and have the masses they do? 
Perhaps it is not obvious why physicists are astonished that nature contains three almost identical families even if one would do. It is because we want to fully understand the laws of nature and the basic particles and forces. We expect that every aspect of the basic laws is a necessary one. The goal is to have a theory in which all the particles and their mass ratios emerge inevitably, without making ad hoc assumptions about the values of the masses and without adjusting parameters. If having three families is essential, then it is a clue whose significance is currently not understood.

Tying It All Together

The Standard Model and the SSM can accommodate the observed family structure, but they cannot explain it. This is a strong statement. It is not that the SSM has not yet explained the family structure but that it cannot. For me, the most exciting aspect of string theory is not only that it may provide us with a quantum theory of all the forces but also that it may tell us what the elementary particles are and why there are three families. String theory seems able to address the question of why the interactions with the Higgs field differ among the families. In string theory, repeated families can occur, and they are not identical. Their differences are described by properties that do not affect the strong, weak, electromagnetic or gravitational forces but that do affect the interactions with Higgs fields, which fits with our having three families with different masses. Although string theorists have not yet fully solved the problem of having three families, the theory seems to have the right structure to provide a solution. String theory allows many different family structures, and so far no one knows why nature picks the one we observe rather than some other [see "The String Theory Landscape," by Raphael Bousso and Joseph Polchinski; Scientific American, September 2004].
Data on the quark and lepton masses and on their superpartner masses may provide major clues to teach us about string theory. One can now understand why it took so long historically to begin to understand mass. Without the Standard Model of particle physics and the development of quantum field theory to describe particles and their interactions, physicists could not even formulate the right questions. Whereas the origins and values of mass are not yet fully understood, it is likely that the framework needed to understand them is in place. Mass could not have been comprehended before theories such as the Standard Model and its supersymmetric extension and string theory existed. Whether they indeed provide the complete answer is not yet clear, but mass is now a routine research topic in particle physics. From checker at panix.com Tue Jul 12 19:30:07 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 15:30:07 -0400 (EDT) Subject: [Paleopsych] NS: Alive! The race to create life from scratch Message-ID: Alive! The race to create life from scratch http://www.newscientist.com/article.ns?id=mg18524861.100&print=true * 12 February 2005 * Bob Holmes YOU might think Norman Packard is playing God. Or you might see him as the ultimate entrepreneur. As founder and CEO of Venice-based company ProtoLife, Packard is one of the leaders of an ambitious project that has in its sights the lofty goal of life itself. His team is attempting what no one else has done before: to create a new form of living being from non-living chemicals in the lab. Breathing the spark of life into inanimate matter was once regarded as a divine prerogative. But now several serious and well-funded research groups are working hard on doing it themselves. If one of them succeeds, the world will have met alien life just as surely as if we had encountered it on Mars or Europa. 
That first alien meeting will help scientists get a better handle on what life really is, how it began, what it means to be alive and even whether there are degrees of "aliveness". "We want to demonstrate what the heck life is by constructing it," says Packard's business partner and colleague Steen Rasmussen, a physicist at Los Alamos National Laboratory in New Mexico. "If we do that, we're going to have a very big party. The first team that does it is going to get the Nobel prize." Although the experiments are still in the earliest stages, some people, especially those with strong religious beliefs, feel uneasy at the thought of scientists taking on the role of creators. Others worry about safety - what if a synthetic life form escaped from the lab? How do we control the use of such technology? Finding a way to address these worries will have benefits beyond helping scientists answer the basic questions of life. The practical pay-offs of creations like Rasmussen's could be enormous. Synthetic life could be used to build living technologies: bespoke creatures that produce clean fuels or help heal injured bodies. The potential of synthetic organisms far outstrips what genetic engineering can accomplish today with conventional organisms such as bacteria. "The potential returns are very, very large - comparable to just about anything since the advent of technology," says Packard. And there is no doubt that there is big money to be made too. Only a few research groups have explicitly set themselves the goal of making a synthetic life form (see "Race for the ultimate prize" - bottom). Most are adapting bits and pieces from existing organisms. ProtoLife's plans are the most ambitious and radical of all. They focus on Rasmussen's brainchild, which he has nicknamed the Los Alamos Bug. Still but a gleam in its creator's eye, the Bug will be built up from first principles, using chemicals largely foreign to existing creatures. 
"You somehow have to forget everything you know about life," says Rasmussen. "What we have is the simplest we could dream up." To achieve this radical simplicity, Rasmussen and his colleagues had to begin with the most basic of questions: what is the least something must do to qualify as being alive? Biologists and philosophers struggled to answer that question for decades (New Scientist, 13 June 1998, p 38). However, most now agree that one key difference - perhaps the only one - between life and non-life is Darwinian evolution. For something to be alive, it has to be capable of leaving behind offspring whose characteristics can be refined by natural selection. That requires some sort of molecule to carry hereditary information, as well as some sort of process - elementary metabolism - for natural selection to act upon. Some kind of container is also needed to bind these two components together long enough for selection to do its work. Containment, heredity, metabolism; that's it in a nutshell. Put those together in the simplest way possible, and you've got the Los Alamos Bug. But every step is completely different from what we're used to (see graphic - the four stages shown are described further in "The Los Alamos Bug" - below). Take containment, for example. Terrestrial life is always water-based, essentially a watery gel of molecules enclosed within an oily membrane. Modern cells move nutrients across this membrane with the help of an array of different proteins embedded in the membrane. The Los Alamos Bug, however, is completely different. For a start it is oil-based, little more than a droplet of fatty acids. "Instead of having a bag with all the good stuff inside, think of having a piece of chewing gum," says Rasmussen. "Then you stick the metabolic molecules and genetic molecules into the chewing gum, so they are attached on the surface or sitting inside the chewing gum."

The bare necessities

The container is the easy part.
The next step - heredity - is where most efforts to create synthetic life get bogged down. The challenge is to create a molecule complex enough to carry useful genetic information, which can also replicate. In modern organisms DNA has a whole army of enzymes to help it replicate its genetic information - far too complicated a process for the Bug. Instead, Rasmussen plans to use a molecule called peptide nucleic acid, or PNA. It uses the same "letters" of genetic code as DNA, but has two forms, one soluble only in fat, the other also attracted to water. Rasmussen hopes to put PNA's dual nature to use in a rudimentary form of replication (see Graphic). The Bug's metabolism has also been pared down to the minimum. The researchers plan to "feed" it with chemicals that can be converted into fatty acids. If enough are produced, the droplet will grow and divide into two. A similar metabolic process turns PNA precursors into functional PNA. Although most of the design is still on the drawing board or in the earliest stages of experimentation, the team has made most progress with the Bug's metabolism. "If you look at the individual pieces, they are all sort of demonstrated in the lab. But if you put everything together, not yet," says Liaohai Chen, a biochemist at Argonne National Laboratory near Chicago, who heads Rasmussen's experimental team. If all goes according to plan, these three components - container, genome and metabolism - should fit together to provide all the essentials for Darwinian evolution. In October 2004, Rasmussen landed a large grant from Los Alamos to begin making the Bug a reality. "I can't promise that we'll have it in three years, but I can guarantee that we'll have good progress," he says. The biggest problem may be coordinating the copying of the PNA and the metabolism of the fatty acid precursors so that replication of the genome proceeds at the same pace as the growth of the droplets. 
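The coordination problem can be illustrated with a toy calculation (hypothetical rates and units, not the team's actual chemistry): if both droplet growth and genome copying are treated as simple exponential processes, mismatched rates mean each division hands the daughters too few or too many genome copies.

```python
# Toy model of the coordination problem: a droplet that grows at one
# exponential rate while its genome replicates at another.
# All rates and units are hypothetical, for illustration only.
import math

def copies_at_division(replication_rate, growth_rate, initial_copies=2):
    """Genome copies present when the droplet has doubled in size.

    The droplet doubles after time ln(2)/growth_rate; during that time
    the genome count multiplies by exp(replication_rate * time).
    """
    doubling_time = math.log(2) / growth_rate
    return initial_copies * math.exp(replication_rate * doubling_time)

# Matched rates: copies double too, so each daughter inherits the
# starting complement of 2.
print(copies_at_division(1.0, 1.0))  # -> 4.0

# Replication at half the growth rate: only ~2.83 copies at division,
# so the genome is progressively diluted out of the lineage.
print(copies_at_division(0.5, 1.0))  # -> ~2.83
```

Even this cartoon shows why the rates must be balanced generation after generation; real chemistry adds the cross-reactions Rasmussen's team worries about on top of that.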
"Almost always when you put processes together there are cross-reactions, things that your theories won't tell you about."

Life support

Another fledgling research programme, known as Programmable Artificial Cell Evolution, or PACE, could provide the solution to this coordination challenge. Packard and Rasmussen are collaborating with PACE, which is focusing some of its attention on Rasmussen's design. A key idea behind PACE is to deliver precise amounts of particular chemicals to synthetic cells at specific places and times, using computers to control the flow of tiny amounts of chemicals. For example, a computer could use sensors to monitor the rates of PNA replication and fatty acid production in Rasmussen's experimental system, then deliver the correct amounts of each precursor. That would let researchers work out the kinks one by one in a controlled, programmable setting, providing something rather like a life-support machine that helps artificial cells through the critical steps towards becoming alive. "Once we have our hybrid unit, then we can successively withdraw the machine to approach a stand-alone cell," says John McCaskill, a chemist at Ruhr University in Bochum, Germany, who heads the PACE programme. In this way the PACE team plans eventually to evolve its way towards a self-supporting artificial cell. To do that, though, the team will need a way to recognise the system's first tentative steps down the pathway to life. But how do you recognise something faintly lifelike, when it looks nothing like the life we know? Look for the footprints of adaptation, says Mark Bedau, a philosopher who specialises in the boundary between life and non-life. Bedau is on leave from Reed College in Oregon to work with Packard at ProtoLife. If something is evolving then it should be generating adaptations - novel solutions to the problems of the world.
And those new solutions, however subtle and incremental, become the foundation from which evolution takes its next steps. Adaptations which confer some advantage should last longer and spread faster than other variations. Bedau is developing statistical tests which will pick up these kinds of patterns in unfamiliar life forms. But since the PACE project has not yet begun lab experiments, he does not know whether the tests can detect the glimmerings of real life. However, he has road-tested them on a system that works in a similar way, namely human culture. In 2002, Bedau and colleague Andre Skusa sifted through more than five years of US patent records, counting the number of times each patent has been cited as a basis for later patents. They found that a few patents - such as the one enabling a web browser to display an ad while loading the main page - were cited far more often than one would expect if the differences found in the number of citations for inventions were random. These key innovations are the equivalent of biological adaptations such as opposable thumbs. "That gives you reason to think it should be possible to do the same kind of thing in chemical systems which are not yet alive but might be on the path to being alive," says Bedau. Using tests like these, the PACE team hopes to see its hybrid gradually become more and more lifelike. But at what point would it actually become alive? Perhaps at no particular point, says Bedau, who thinks it is quite possible that the living and the non- living are separated not by a clear, distinct line but by a wide grey area in which the Bug is partly but not totally alive. "There are shades of grey, and I imagine measuring how dark the grey is," he says. "Our conception of what life is will evolve as we learn more and acquire the ability to make things that are more and more alive." The moment when a blob of molecules becomes a fully living, evolving being is at least several years off. 
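The core of the patent-citation test described above is simple to sketch: flag items cited far more often than a random-variation null would predict. The following is a simplified stand-in for the idea, not Bedau and Skusa's published method; the Poisson null and the threshold are assumptions for illustration.

```python
# Sketch of adaptation detection: flag citation counts too large to be
# explained by random variation around the average. Simplified stand-in,
# not the published Bedau-Skusa statistics.
import math

def poisson_sf(k, mean):
    """P(X >= k) under a Poisson null with the given mean."""
    return 1.0 - sum(math.exp(-mean) * mean ** i / math.factorial(i)
                     for i in range(k))

def find_adaptations(citation_counts, alpha=0.001):
    """Indices of counts implausibly large under the random null."""
    mean = sum(citation_counts) / len(citation_counts)
    return [i for i, k in enumerate(citation_counts)
            if poisson_sf(k, mean) < alpha]

# Most "patents" draw a few citations; one is a key innovation.
counts = [1, 2, 0, 3, 1, 2, 40, 1, 0, 2]
print(find_adaptations(counts))  # -> [6]
```

The analogous chemical test would replace citation counts with, say, how often a molecular variant is copied into later generations, and flag variants that spread faster than chance allows.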
"Even our optimists wouldn't put a time horizon much sooner than 10 years for that kind of achievement," says Packard. Indeed, sceptics wonder whether the Los Alamos Bug and its ilk will ever yield anything useful. "It's certainly interesting from the conceptual point of view," says Pier Luigi Luisi, a biochemist at the University of Rome 3 and an expert on synthetic life. "But nature with nucleic acids and enzymes is so much smarter, because these are products that have been optimised over billions of years of evolution. To pretend to do life with simple chemistry is a nice ambitious idea, but it's probably not going to be very efficient." Still, if Packard, Rasmussen and their colleagues do someday succeed in creating synthetic life forms, they will have opened the door to a world of new possibilities. "We are breaking the last barriers between us and living technology," says Rasmussen. "That's going to be a very big thing. It's going to happen, no doubt about it." Among the most obvious payoffs could be organisms custom-designed to break down toxic compounds or produce useful chemicals such as hydrogen fuel. More conventional organisms can be genetically modified to do these tasks, but as Rasmussen points out, "the problem is these guys have evolved for billions of years. They're extremely versatile, and it's very difficult to keep them on task." An artificial organism, on the other hand, could in principle be built to do nothing but the task at hand, yet still have the evolutionary flexibility to adapt to changing conditions. Packard hopes that this controlled adaptability could lead to even greater things. He envisions living pharmaceuticals that deliver drugs to us in an intelligent, adaptive way, or diagnostic life forms that could roam our bodies collecting information and watching for signs of a problem. 
The ultimate goal would be machines that repair themselves as living beings do - even computers that can handle incredibly complex calculations while coping with inevitable errors, just as our bodies tolerate errors and failures within our hundreds of billions of cells. If life is all about the ability to evolve and adapt, then living technologies always have the potential to surprise us with unexpected new strategies that can take them beyond our control. But then again, that risk is nothing new. We already grapple with it when contemplating what would happen if robots or artificial intelligence were to get out of hand and in evaluating the safety of genetically modified fish, crossbred potatoes or even introduced rabbits. Indeed, for the foreseeable future, synthetic life probably poses much less of an escape risk, because the early versions, at least, will be so fragile and require so much life support. That means the safety of synthetic life is something to keep an eye on, not to be frightened of. "There isn't going to be some precipice we're going to fall over," says Bedau. "We'll be slowly inching our way down, and we'll have lots of opportunity to turn around." As well as concerns about safety, synthetic life raises some profound ethical and religious issues. "Just the fact that you're making life from scratch will give some people pause. They will think that's a prerogative that humans should never take," says Bedau. If humans can create life on their own, doesn't that remove one of the last deep mysteries of existence, in effect prying God's fingers from one of his last remaining levers to affect the world? Not necessarily, say theologians. "We are fully a part of nature, and as natural beings who are living and creating synthetic life, we are in a sense life creating more life, which is what's been going on in evolution for 4 billion years now," says John Haught, a Catholic theologian at Georgetown University in Washington DC. 
"And that does not in principle rule out that God would still be creating life using natural causes - namely us - which is the way in which theology understands God as always operating in the world." One thing seems certain: synthetic life will provide philosophers with plenty to chew on right from the start. Until now, efforts to come up with a good definition of life have been hampered by the fact that we are trying to generalise from just one example, the life that arose here on Earth. Having a second form, completely independent and based on different chemistry, should give a new perspective on this age-old question. And knowing what did or did not work in the lab may also help us understand the origin of life - the first version, that is - on Earth.

The Los Alamos Bug

Containment

This relies on the fact that oil and water do not mix. The components of each individual Bug are contained by a droplet of fatty acids, suspended in a watery solution enclosed by a test tube. Each fatty acid molecule has a negatively charged head which is attracted to water and which faces out into the watery environment, and a water-hating oily tail facing inward.

Heredity

Instead of DNA the Bug has short stretches of peptide nucleic acid, or PNA. Like DNA, PNA is made of two intertwining strands containing the genetic "letters" A, T, C and G. And like DNA, the sequences of letters on these strands complement each other. A pairs up with T and C pairs with G. The strands have a peptide backbone which does not carry an electrical charge, so will dissolve in fat. This means that the molecules of PNA prefer to face the inside of the fatty acid droplet, like crumbs embedded in the surface of a piece of chewing gum. This gives the molecule unusual mobility. In its usual double-stranded form, with its two peptide backbones facing outwards, a PNA molecule is completely fat-soluble, so it will sink into the oily centre of the Bug's droplet.
But above some critical temperature, the two strands of the PNA double helix separate spontaneously. When this happens, the bases, which bear a slight charge, are exposed and attracted to the Bug's watery environment. So these single-stranded PNA molecules should then migrate to the edge of the droplet where the backbone can remain in the oil while the bases interact with the water outside. This mobility provides the handle needed to control replication. The plan is to supply the Bug with short bits of single-stranded PNA precursors, just half the length of its tiny genome. If a single-stranded PNA gene on the Bug's surface encounters two of these "nutrient" PNAs with the right base sequences, it will pair with them to form a double-stranded PNA molecule. This should then sink down into the droplet, where conditions favour the joining-up of the two "nutrient" fragments into a whole strand. Eventually, the double-stranded molecule will dissociate once again and its two strands drift back to the surface where each can pick up new partners - a rudimentary form of replication.

Metabolism

The third essential part of the Bug's life - metabolism - has also been pared to its barest minimum. The researchers plan to "feed" the Bug with fatty acid precursors. These will have photosensitive molecules attached to their charged "head" ends. These photosensitive caps mask the charged head, making the molecules completely fat soluble. This means they will tend to collect within the Bug's droplets. When light strikes the photosensitive cap, it breaks off, exposing the negatively charged fatty acid head, which migrates back to the surface of the droplet. Eventually, so many new fatty acids will be produced that they will not all fit on the surface and the droplet will split in two to create a larger surface area. The Bug will also be supplied with inactive PNA precursors bound to a photosensitive molecule.
Once again, when light strikes this photosensitiser, it breaks off to release the active PNA fragment. Effective metabolism also requires one more step to prevent the photosensitive molecule, once broken off, from re-sticking to the fatty acid or PNA and so deactivating it once again. The PNA genetic material prevents this by acting as a rudimentary wire, conducting electrons to neutralise the photosensitiser. In this way, the Bug's "genome" plays an active role in the metabolic process.

Evolution

If all goes according to plan, these three components - container, genome, metabolism - should fit together to provide all the essentials for Darwinian evolution. As the Bugs grow and reproduce, corralled in a test tube, natural selection should favour PNA base sequences that pair up and split off fastest, and also conduct electrons most efficiently to the photosensitisers.

Synthetic slaves

Artificial organisms could be custom-built for particular tasks:
* break down toxic compounds
* produce useful chemicals such as hydrogen fuel
* act as "living pharmaceuticals", delivering drugs in the body in an adaptive way
* be tiny diagnosticians, roaming our bodies, collecting information and checking for problems
* become part of machines that can repair themselves as living beings do

Race for the ultimate prize

THE Los Alamos Bug has some stiff competition in the race to be the first artificial life form, especially since some of the entrants are taking much more conventional routes to that goal. At the Institute for Biological Energy Alternatives in Rockville, Maryland, Craig Venter, leader of the private group that sequenced the human genome, and colleague Hamilton Smith are trying to create a new life form by extracting the genome from an existing bacterium and replacing it with a synthetic genome stripped down to a bare minimum of genes (New Scientist, 31 May 2003, p 28).
Because this approach leaves most of the cell's machinery intact, Venter's team is widely expected to be the first to succeed, perhaps within a few months or years. (Uncharacteristically, Venter is not talking to the press about this project.) However, Venter's new organism will end up looking very much like existing life. And at the University of Rome 3, Pier Luigi Luisi is working on the "minimal cell project". Starting with a simple membrane-bound vesicle, Luisi's team plans to gradually add in off-the-shelf enzymes and other cellular components until they assemble the simplest possible working cell. Across the Atlantic at Harvard University, Jack Szostak has been working on a synthetic life form just as simple as Rasmussen's Los Alamos Bug, but using more familiar chemistry. Szostak's design calls for a tiny membrane-bound vesicle containing little more than an RNA or RNA-like molecule with a special talent: that of catalysing its own replication. The problem is that no one has yet developed an RNA capable of replicating more than just a small part of itself. Szostak predicts success is probably 10 or 20 years off. "I've been saying that for the last 10 or 20 years," he says, "and it's still true." From checker at panix.com Tue Jul 12 19:30:24 2005 From: checker at panix.com (Premise Checker) Date: Tue, 12 Jul 2005 15:30:24 -0400 (EDT) Subject: [Paleopsych] NYT: Neuron Network Goes Awry, and Brain Becomes an IPod Message-ID: Neuron Network Goes Awry, and Brain Becomes an IPod http://www.nytimes.com/2005/07/12/health/psychology/12musi.html By [3]CARL ZIMMER Seven years ago Reginald King was lying in a hospital bed recovering from bypass surgery when he first heard the music. It began with a pop tune, and others followed. Mr. King heard everything from cabaret songs to Christmas carols. "I asked the nurses if they could hear the music, and they said no," said Mr. King, a retired sales manager in Cardiff, Wales. "I got so frustrated," he said. 
"They didn't know what I was talking about and said it must be something wrong with my head. And it's been like that ever since." Each day, the music returns. "They're all songs I've heard during my lifetime," said Mr. King, 83. "One would come on, and then it would run into another one, and that's how it goes on in my head. It's driving me bonkers, to be quite honest." Last year, Mr. King was referred to Dr. Victor Aziz, a psychiatrist at St. Cadoc's Hospital in Wales. Dr. Aziz explained to him that there was a name for his experience: musical hallucinations. Dr. Aziz belongs to a small circle of psychiatrists and neurologists who are investigating this condition. They suspect that the hallucinations experienced by Mr. King and others are a result of malfunctioning brain networks that normally allow us to perceive music. They also suspect that many cases of musical hallucinations go undiagnosed. "You just need to look for it," Dr. Aziz said. And based on his studies of the hallucinations, he suspects that in the next few decades, they will be far more common. Musical hallucinations were invading people's minds long before they were recognized as a medical condition. "Plenty of musical composers have had musical hallucinations," Dr. Aziz said. Toward the end of his life, for instance, Robert Schumann wrote down the music he hallucinated; legend has it that he said he was taking dictation from Schubert's ghost. While doctors have known about musical hallucinations for over a century, they have rarely studied them systematically. That has changed in recent years. In the July issue of the journal Psychopathology, Dr. Aziz and his colleague Dr. Nick Warner will publish an analysis of 30 cases of musical hallucination they have seen over 15 years in South Wales. It is the largest case-series ever published for musical hallucinations. "We were trying to collect as much information about their day-to-day lives as we could," Dr. Aziz said.
"We were asking a lot of the questions that weren't answered in previous research. What do they hear, for example? Is it nearby or is it at a long distance?" Dr. Aziz and Dr. Warner found that in two-thirds of the cases, musical hallucinations were the only mental disturbance experienced by the patients. A third were deaf or hard of hearing. Women tended to suffer musical hallucinations more than men, and the average patient was 78 years old. Mr. King's experience was typical for people experiencing musical hallucinations. Patients reported hearing a wide variety of songs, among them "Don't Cry for Me Argentina" and "Three Blind Mice." In two-thirds of the cases, the music was religious; six people reported hearing the hymn "Abide With Me." Dr. Aziz believes that people tend to hear songs they have heard repeatedly or that are emotionally significant to them. "There is a meaning behind these things," he said. His study also shows that these hallucinations are different from the auditory hallucinations of people with schizophrenia. Such people often hear inner voices. Patients like Mr. King hear only music. The results support recent work by neuroscientists indicating that our brains use special networks of neurons to perceive music. When sounds first enter the brain, they activate a region near the ears called the primary auditory cortex that starts processing sounds at their most basic level. The auditory cortex then passes on signals of its own to other regions, which can recognize more complex features of music, like rhythm, key changes and melody. Neuroscientists have been able to identify some of these regions with brain scans, and to compare the way people respond to musical and nonmusical sounds. Only a handful of brain scans have been made of people with musical hallucinations. Dr.
Tim Griffiths, a neurologist at the University of Newcastle Upon Tyne in England, performed one of these studies on six elderly patients who developed musical hallucinations after becoming partly deaf. Dr. Griffiths used a scanning technique known as PET, which involves injecting radioactive markers into the bloodstream. Each time he scanned his subjects' brains, he asked them whether they had experienced musical hallucinations. If they had, he asked them to rate the intensity on a scale from one to seven. Dr. Griffiths discovered a network of regions in the brain that became more active as the hallucinations became more intense. "What strikes me is that you see a very similar pattern in normal people who are listening to music," he said. The main difference is that musical hallucinations don't activate the primary auditory cortex, the first stop for sound in the brain. When Dr. Griffiths's subjects hallucinated, they used only the parts of the brain that are responsible for turning simple sounds into complex music. These music-processing regions may be continually looking for signals in the brain that they can interpret, Dr. Griffiths suggested. When no sound is coming from the ears, the brain may still generate occasional, random impulses that the music-processing regions interpret as sound. They then try to match these impulses to memories of music, turning a few notes into a familiar melody. For most people, these spontaneous signals may produce nothing more than a song that is hard to get out of the head. But the constant stream of information coming in from the ears suppresses the false music. Dr. Griffiths proposes that deafness cuts off this information stream. And in a few deaf people the music-seeking circuits go into overdrive. They hear music all the time, and not just the vague murmurs of a stuck tune. It becomes as real as any normal perception. "What we're seeing is an amplification of a normal mechanism that's in everyone," Dr. Griffiths said.
It is also possible for people who are not deaf to experience musical hallucinations. Epileptic seizures, certain medications and Lyme disease are a few of the factors that may set them off. Dr. Aziz also noted that two-thirds of his subjects were living alone, and thus were not getting much stimulation. One patient experienced fewer musical hallucinations when Dr. Aziz had her put in a nursing home, he said, "because then she was talking to people, she was active." There is no standard procedure for treating musical hallucinations. Some doctors try antipsychotic drugs, and some use cognitive behavioral therapy to help patients understand what's going on in their brains. "Sometimes simple things can be the cure," Dr. Aziz said. "Turning on the radio may be more important than giving medication." Despite these treatments, many people with musical hallucinations find little relief. "I'm just living with it," Mr. King said. "I wish there was something I could do. "I do silly things like talking to myself, hoping that when I stop talking, the tune will stop. But it doesn't work that way." More studies may help researchers find new treatments. Prof. Diana Deutsch, a psychologist at the University of California, San Diego, is planning a new scanning study of musical hallucinations in people who are not deaf, using functional M.R.I. Unlike the PET scanning used by Dr. Griffiths, functional M.R.I. is powerful enough to catch second-by-second changes in brain activity. "It might be a while before we have results, but it's certainly something I'm very excited about," Dr. Deutsch said. "We'll see where it takes us." Dr. Aziz also believes that it is necessary to get a better sense of how many people hear musical hallucinations. Like Mr. King, many people have had their experiences dismissed by doctors. Dr. Aziz said that ever since he began presenting his results at medical conferences last year, a growing number of patients have been referred to him.
"In 15 years I got 30 patients," he said, "and in less than a year I've had 5. It just tells you people are more aware of it." Dr. Aziz suspects that musical hallucinations will become more common in the future. People today are awash in music from radios, televisions, elevators and supermarkets. It is possible that the pervasiveness of music may lead to more hallucinations. The types of hallucinations may also change as people experience different kinds of songs. "We have speculated that people will hear more pop and classical music than they do now," said Dr. Aziz. "I hope I live long enough to find out myself in 20 years' time." From checker at panix.com Wed Jul 13 22:09:58 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:09:58 -0400 (EDT) Subject: [Paleopsych] Sunday Herald: The Greatest Thinker in the World ... Ever Message-ID: The Greatest Thinker in the World ... Ever http://www.sundayherald.com/print50415 Sunday Herald - 26 June 2005 [I might actually go along with this, though Dave Aristotle may give him a lot of competition.] Although he lacked the soundbites of Marx and the attitude of Sartre, David Hume should be recognised as the finest philosopher of all time By Julian Baggini _________________________________________________________________ PEOPLE of Scotland, it is more than your patriotic duty to help crown your 18th century countryman, David Hume, as the greatest philosopher of all time. For once, naked nationalism and good rational sense both lead us to the same conclusion: among all great thinkers, Hume reigns supreme. And, lest misplaced patriotism be suspected, I say this as someone who is no more Scottish than the Duke of Edinburgh. Radio 4's In Our Time programme is currently conducting a poll to determine the world's greatest philosopher, and although its presenter, Melvyn Bragg, has let it slip that Marx is the early leader, inside sources tell me Hume is hot on his heels.
So there is still time to win the day for Scotland's finest mind. That Hume is even a contender is testimony to the strength of his philosophy and the intelligence of the voters, since he lacks all the necessary requisites of a popular hero. Marx has the advantage of some seriously memorable soundbites: "Religion is the opiate of the masses"; "From each according to his abilities, to each according to his needs", and "Either this man is dead or my watch has stopped." (Admittedly, that last one is by Groucho, not Karl.) Hume's most famous quotes, in contrast, are completely baffling to the uninitiated. There is wisdom in his saying: "Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." But you'd be forgiven for not spotting it. Jean-Paul Sartre reaps the benefits of his cool image. Whether historically accurate or not, there is a definite romance to the Left Bank cafés, the Gauloise cigarettes, the black polonecks and all that intense talk of despair and freedom. Hume, on the other hand, played billiards in drawing rooms and loved his mum. The mystique of Kierkegaard and Camus is heightened by their young and tragic deaths. Hume passed away aged 65 of intestinal cancer, cheerful and in good humour. That's really no way to start a posthumous personality cult. Indeed, the average person in the street knows little more about the man - except, perhaps, that "David Hume could out-consume Schopenhauer and Hegel", as the Monty Python song insisted. And yet Hume has endured, hailed by many as the greatest British philosopher. Can we go further and say he is the greatest philosopher, full stop? I think we can, not least because Hume's whole approach to philosophy is needed even more now than it was in his time. Hume was born in Edinburgh in 1711, in the infancy of both the Enlightenment and the union of Scotland and England into Great Britain.
Scottish philosophy was being transformed by the success of science, which was based not on abstract theory, but empirical observation of how the world actually works. Suddenly, the theoretical speculations of Continental thinkers such as Descartes and Spinoza seemed hopelessly detached from the real world they sought to explain. Philosophy had to be made natural, its reasoning rooted in experience. Hume was just one of many who helped take philosophy along this new path. However, it was also a deeply uncertain one in which the threat of scepticism was ever present. Gone were the dreams of Plato and Descartes of a philosophy beyond doubt. In its place came the need to learn how to live with doubt without being consumed by it. Hume's unique genius was to show how this could be done. Hume practised what he preached. Although when in the midst of his philosophical deliberations he was often perturbed by their sceptical implications, these worries soon dissolved when he rejoined human company and had a game of billiards. This may seem shallow, but it is in fact a mature recognition that those who claim to be nihilists are just posturing: nobody really believes in nothing. The lessons he taught are desperately relevant today, when certainty is only found in religious fundamentalism, yet uncertainty risks a descent into postmodern relativism and intellectual anarchy. In this climate, how do we resolve ethical disputes such as those that rage over stem-cell research, euthanasia and civil liberties versus civic security? How can we trust science when it gets so many things wrong? How do we resolve the great ideological clashes of East and West when there are no unquestionable fundamentals upon which to build agreement? What we need is a Humean approach to provide the intellectual ballast necessary to stay afloat in a sea of uncertainty. Consider the question of ethical values. Hume agreed with moral sceptics on several key points. 
He did not believe it was possible to establish absolute moral values. Religion could certainly not provide these, for there is simply no way we can trust the authority of religious texts or leaders. Nothing can be true or false because a religion says it is, but only because we have good reasons to believe it is true or false. In a world in which there are so many different religions and denominations, all claiming different things, Hume's scepticism seems wiser than ever. If we are to accept the guidance of one religion over another, we need reasons. "Trust me, I'm a cleric" is not a good one, not least because for every bishop saying that homosexuality is perfectly acceptable, there is another claiming that sodomites will burn in hell for their sins. Nor can moral values be established by pure reason. Hume referred to the kinds of truths which could be proven by rationality alone as "matters concerning the relation of ideas", once again demonstrating his uncanny knack of failing to coin a catchy phrase. One example is mathematics. It is because of what the numbers and symbols mean that two plus two must equal four. Similarly, you don't need to conduct a survey of bachelors to know that they are all unmarried men. Hume thought it obvious that moral matters do not fall into this category. You cannot know that Asbos are an unacceptable limit on civil liberties just by attending to what those words mean. Nor can you resolve a dispute between those who think a war is justified and those who do not, simply by determining the meanings of the terms "justified" and "war". Moral debate is not like mathematics, and so disagreements cannot be resolved by pure theory. So neither religion nor reason can establish moral certainties. Does that mean we are then condemned to a kind of moral free-for-all, in which what is right for you may not be right for me, and nobody is entitled to criticise anyone else's ethics?
Some find this view surprisingly attractive, since it is supremely tolerant. But when push comes to shove we know that absolute toleration is abhorrent. The killings in Darfur are not alright for the Sudanese victims. Anti-war protesters do not think the invasion of Iraq was right for Bush and Blair and wrong for them - they think the war was just wrong. Fortunately, Hume's view does not lead us to moral anarchy. Besides religion and pure reason, there is another route to knowledge. Questions concerning matters of fact are settled by looking at how the world actually works. So, if you want to know at what temperature water boils, you have to conduct experiments to find out. Sitting in your armchair contemplating the meaning of "water" and "boil" will not help. Crucially, however, matters of fact are never proven beyond all possible doubt. You have to accept that science is less than certain, but that, nonetheless, it is more reliable than, say, superstition. Whereas previous philosophers demanded certainty, Hume tried to grade degrees of uncertainty. Clearly, however, moral principles are much less certain than the laws of physics. Right and wrong cannot be observed and measured like energy or mass. Rather, the facts of morality are to be observed in human feeling and compassion. When we say that torture is wrong, for example, we are not identifying a feature of torture itself, but expressing something of our reaction to it. What is more, these feelings are somehow natural for human beings. Empathy is a human universal, and this is what enables people to agree about what is good and bad. Feelings may be affected by upbringing, society and reasoning, but are not simply products of any one of these. Hence the curious phrase: "Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." In other words, it is not rational argument that makes us recoil from the idea of destroying the whole world, but human fellow feeling. 
Hume's strategy for resolving today's moral dilemmas would be to start by showing how we cannot accept any absolute principles dictated by religious leaders. Then he would show how any moral principles held to be self-evident or proven are no such thing. Purged of all bogus absolutes, we would then begin the process of identifying the common humane impulses that morally motivate us and using our reason to negotiate our way through the contradictions and complexities that emerge. This is pretty much how modern ethics committees proceed. They cannot make their starting points absolutes, since not everyone will agree with them. Rather, they need to build from what unites us. Hume's genius was his ability to combine a ruthless intellect that revealed the limitations of our understanding with the wisdom to see we can move forward with the meagre intellectual resources available to us. That's why Hume is above fashion and doesn't need a dramatic life, a romantic death or clever slogans in order to endure. A vote for Hume is a vote for the only philosopher who is able to defeat the scepticism of our time without dogmatism. Vote for the greatest philosopher at bbc.co.uk/radio4/history/inourtime Julian Baggini's latest book, The Pig That Wants To Be Eaten And 99 Other Thought Experiments, is published next month by Granta. He appears at the Edinburgh International Book Festival on August 23 From checker at panix.com Wed Jul 13 22:10:06 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:10:06 -0400 (EDT) Subject: [Paleopsych] The New Atlantis: Charles T. Rubin: Daedalus and Icarus Revisited Message-ID: Charles T. Rubin: Daedalus and Icarus Revisited http://www.thenewatlantis.com/archive/8/rubin.htm The New Atlantis, Number 8, Spring 2005, pp. 73-91. 
Doubts about the goodness of scientific and technological progress are hardly new, and fears about the dangers of human knowledge existed long before it became plausible to worry that the fate of the entire world might be in peril. The physicist Freeman Dyson offers one common -- and very modern -- way of describing our predicament: "Progress of science is destined to bring enormous confusion and misery to mankind unless it is accompanied by progress in ethics." In other words, we need some novel ethic to match our technological ingenuity. But progress in ethics might also mean what Abraham Lincoln had in mind when describing the principles of the Declaration of Independence as "a standard maxim for free society ... constantly looked to, constantly labored for, and even though never perfectly attained, constantly approximated." Dyson's idea suggests new ideals replacing old ones as history moves technologically forward; Lincoln's idea suggests more permanent human aspirations that serve as the measure of different ages. Either meaning poses very serious challenges. Genuinely novel ethics are not always genuine improvements, while many anciently articulated ethical goals remain elusive. The ambiguity in the meaning of moral progress is at the heart of a 1923 debate between biochemist J. B. S. Haldane and logician Bertrand Russell, two of the greatest and most argumentative public intellectuals of twentieth-century Britain. Haldane, who would go on to an extremely distinguished career as a biochemist and geneticist, spoke under the auspices of the Cambridge Heretics discussion club. Russell, already a famous philosopher, answered him as part of a speakers' series sponsored by the Fabian Society under the general title, "Is Civilization Decaying?" The published version of Haldane's remarks created no little controversy; even Albert Einstein had a copy in his library.
There is also little question that Haldane's work influenced two of the greatest British critics of scientific and technological progress: Aldous Huxley and C. S. Lewis. The titles of the essays, Haldane using "Daedalus" and Russell "Icarus", support the common idea that Haldane writes as an advocate of progress and Russell as a skeptic. While this view is understandable, it is hardly exhaustive. Haldane freely highlights horrible possibilities for the future, and he is quite blunt about the socially problematic character of scientific research and scientists. Russell, on the other hand, can imagine circumstances (albeit unlikely ones) where the power of science could be ethically or socially constrained. The real argument is about the meaning of and prospects for moral progress, a debate as relevant today as it was then. Haldane believed that morality must (and will) adapt to novel material conditions of life by developing novel ideals. Russell feared for the future because he doubted the ability of human beings to generate sufficient kindliness to employ the great powers unleashed by modern science to socially good ends. Both authors explore the problem of relating moral and technological progress with sufficient depth that we would benefit by reexamining this debate with a view to our own time. But the manner in which they frame the problem stands in the way of articulating a clear moral goal that might serve as progress's purpose and judge. With serious ethical discussion thus sidelined, technological change itself becomes the fundamental imperative, despite the reasonable doubts both Haldane and Russell have concerning its ultimate consequences. And while Haldane is more loath to acknowledge it than Russell, the net result of their debate is a tragic view of mankind's future, marked by an irreconcilable and destructive mismatch between our aspiration to understand nature and the power we gain from that knowledge.
In the Image of Science

Haldane begins Daedalus with a directness that does not characterize most of the essay that follows. Drawing on scenes of destruction from World War I and from casual discussion of the possible reasons for exploding stars, he asks whether the progress of science will culminate in the complete destruction of humanity or in the reduction of human life to an appendage of machines. "Perhaps a survey of the present trend of science may throw some light on these questions." It is already revealing that Haldane gives this kind of scientific projection such a privileged place, for it suggests that in his mind the primary question behind the destruction of mankind is simply whether science will gain the power to accomplish it. If the central issue of our future is the power to destroy ourselves, then the most obvious way of avoiding that risk is preventing mankind from gaining that power in the first place. Yet Haldane sees no realistic chance of stopping the progress of science. He argues that believing in the future might strangely require a willingness to see all that we know destroyed and replaced. Even if we can avert apocalyptic disaster, we will remake ourselves in unrecognizable ways. Haldane believes that biology is likely to become the center of scientific interest in the future, and this is where the bulk of his essay is focused. But he digresses to discuss the situation in physics, which is in a state of "profound suspense ... primarily due to Einstein, the greatest Jew since Jesus." Avoiding an inevitably technical discussion of physical theory, he decides instead to speculate on the practical consequences of Einstein's discovery. In so doing, he provides a preview of the logic that will inform his entire essay. Einstein heralds the end of the era of Newtonian physics, whose concomitant working metaphysic was materialism.
This scientific revolution means the coming of a new metaphysical and moral order, and Haldane predicts that Einstein's work will bring with it a triumph of Kantian idealism (although he admits that he does not know exactly what this change will mean in practice). He projects further that some centuries hence "physiology will invade and destroy mathematical physics." Overall, "we are working towards a condition when any two persons on earth will be able to be completely present to one another in not more than 1/24 of a second.... Developments in this direction are tending to bring mankind more and more together, to render life more and more complex, artificial and rich in possibilities -- to increase indefinitely man's powers for good and evil." This statement is an answer of sorts to the original question: Will man survive, and what will he be like? Haldane's answer hardly seems like much of an advance over where the essay began: Self-destruction, he suggests, is a genuine possibility as we "increase indefinitely man's powers for good and evil." But in fact, Haldane has laid out two crucial elements of his larger argument. First, there is the implicit definition of progress: bringing mankind closer together, increased complexity, artificiality, and open-endedness. We will see how this view culminates in his picture of a united humanity working to transcend itself, and in his turn to evolution as a form of salvation. Second, as Haldane understands the world, scientific discovery brings with it a horizon of belief that sets the parameters of daily life. While Haldane will speak of labor and capital as "our masters," his essay attempts to show how it is really the scientists, the Daedaluses of the world, who discover new ways of seeing and doing, and at a far deeper level are in control.
This point is reiterated in yet another digression on the decay of certain arts, which Haldane describes as a consequence of artists not understanding the scientific and industrial order in which they live. This view of science's role in setting the agenda for human life has crucial consequences for the ethical question that is supposed to be the motive force behind the essay. If science shapes the parameters of human aspiration and human virtue, then morality is simply an effort to respond to man's ever-increasing and ever-changing power over nature. We judge ourselves in the image of science, not science in the image of some transcendent idea of the human good.

The Malleability of Morals

When the main topic of the essay -- advances in biology -- is taken up, the subject is again introduced with a digression. To foretell the impact of future development in biology, Haldane looks at four biological inventions of the past to see the nature of their consequences. Three inventions are stated directly: domestication of animals, domestication of plants, and production of alcohol. A fourth is only hinted at, involving an unspecified invention that focused male sexual attention on the female face and breasts rather than buttocks. Haldane also mentions the invention of bactericide and birth control. These biological inventions have two common characteristics. First, they have had a profound emotional and ethical effect on human life. Second, the biological invention tends to begin as a perversion and end as a ritual supported by unquestioned beliefs and prejudices. Haldane asks us to consider the radical indecency that milk drinking introduces into our relationship to the cow, or the process of corruption which yields our wine and beer. Any innovator who would suggest such disgusting things would clearly at first be considered outside the bounds of civilization. But civilization adjusts.
In a typical bit of satire, Haldane wonders what strange god will have the hardihood to adopt Charles Bradlaugh and Annie Besant, tireless workers for birth control and other secular causes of the nineteenth century. Haldane takes the figure of Daedalus as instructive about the changing status of beliefs. Daedalus had no care for the gods, and the gods failed to punish him even for so monstrous an act as breeding a woman with a bull. He was the first to demonstrate that the scientific worker is not concerned with gods, and thus he exposed himself to the universal and agelong reprobation of humanity -- with the exception of Socrates, who was proud to claim him as an ancestor. The point here is ambiguous. If there is ongoing disapproval of Daedalus, then Haldane's case that mankind adjusts its ideals to its technologies seems questionable. Yet insofar as the West is heir to Socratic rationalism, it is somehow also heir to Daedalus. Haldane tries to clarify his argument that yesterday's perversions become today's unquestioned beliefs by presenting the bulk of his projections about biology in the form of an essay from 150 years hence, written by "a rather stupid undergraduate" reviewing the progress made in this period. The student presents the most remarkable achievements -- a global food glut, the transformation of the color of the ocean to purple due to the same microorganism that created the food glut, the elimination of deserts, ectogenic children, and genetic engineering -- in a deeply matter-of-fact and unreflective way. This is his world, and while intellectually he understands it has not always been so, he is reasonably content with the way things are. Haldane follows this mock essay with his own speculations on birth control, eugenics, behavior control, the abolition of disease and old age, and the transformation of death into a physiological event like sleep, shorn of its emotional terrors.
In arguing that we adjust our ethics to our inventions, Haldane exploits two truths about human life: over time, many ideas of right and wrong do change in response to changed circumstances, and most people do have a fairly thoughtless understanding of the sources of the ideas of right and wrong that inform their moral horizons. But Haldane draws too much from these observations, because he fails to connect them in any way. He neglects to think about the possibility that greater reflection on moral principles might lead to less malleability. Socrates, after all, proceeded in his investigations by holding open the possibility that opinion could be distinguished from truth, even in moral matters. For his most ancient examples, the truth of the ethical transformation Haldane describes is so shrouded in myth and mystery that we cannot say anything with certainty. Haldane does not even attempt to produce evidence of a period of revulsion concerning milk, alcohol, or the female face. He is on more solid ground with the cases of sanitation and birth control. But the growing acceptance of both, in the face of what Haldane would see as mere traditionally minded opposition, tells us nothing in and of itself. We would need to examine, for example, whether opposition to cleanliness was any more or less defensible in its moral claims than opposition to birth control. Since Haldane does not find it necessary to reflect on this point, he leaves himself open to the charge of holding an unreflective and dogmatic belief in ethical relativism, which from the start transforms all moral claims into cultural prejudices. Indeed, when Haldane speaks in his own voice about what the future holds, he notes that "I am Victorian enough in my sympathies to hope that after all family life, for example, may be spared," even as it becomes unnecessary for women to bear children.
His only imaginable response to the abolition of the family is rooted in emotions trained by the mores of a particular time and place. At this point in the essay, it appears that Haldane can provide no assurances that scientific progress will not lead to our demise. In fact, that demise might be brought on by the way changes wrought by science create new moral desiderata: new norms that adjust our expectations to things that we once saw as evil, blinding us to a self-destructive course. And even if science does not lead to our demise, a man of the past looking into the future is unlikely to see what he would call progress strictly speaking; he is likely instead to see horrifying change and a generation that complacently accepts indecency. This part of Haldane's essay culminates with the observation that "the conservative has but little to fear from the man whose reason is the servant of his passion, but let him beware of him in whom reason has become the greatest and most terrible of the passions. These are the wreckers of outworn empires and civilizations, doubters, disintegrators, deicides." This free-spirited view of human affairs might be tolerable if one were confident that something better would be built on the wreckage of the old. But on Haldane's own understanding, as presented so far, no such claim can withstand the fierce gaze of the reasonable man. So it may come as no surprise that Haldane tries to shift somewhat the ground of his argument.

Might Makes Right

This shift begins with Haldane's argument that science should be seen from three points of view: First, it is the free activity of man's divine faculties of reason and imagination. Second, it is the answer of the few to the demands of the many for wealth, comfort and victory. Haldane legitimately reminds us of the bargain on which modern natural science rests, which allows the free activity of science for the sake of the benefits it produces.
(Of course, if those benefits are inherently double-edged, one might reconsider the terms of the original bargain.) Third, science is man's gradual conquest, first of space and time, then of matter as such, then of his own body and those of other living beings, and finally the subjugation of the dark and evil elements of his own soul. These conquests, Haldane acknowledges, will never be complete but they will be progressive. And the question of what he [mankind] will do with these powers is essentially a question for religion and aesthetic. This last point is breathtaking, as Haldane seems to understand. For what are the dark and evil aspects of the soul that require conquest? Not, apparently, the passion of unadulterated reason; not the urge to destroy civilizations or commit deicide; not the urge to murder a rival or satisfy a monstrous lust. Not, alas, if Daedalus is to remain a model to be admired. And how do religion and aesthetic suddenly rise to such a prominent place in shaping man's fate, or is their impotence in the face of scientific advance precisely the point? For Haldane acknowledges that the scientific powers now being given to mankind are like giving a baby a box of matches; we seem to possess the power of gods and the wisdom of infants. How can we expect this all to turn out well? In what sense can we call the conquest of nature and of the human soul progressive? Haldane's hope is that "the tendency of applied science is to magnify injustices until they become too intolerable to be borne, and the average man whom all the prophets and poets could not move, turns at last and extinguishes the evil at its source." But with the impotence of religion and aesthetic already confirmed, we are left to wonder what Haldane means by injustice, or by what standard evil will be recognized and judged. To clarify what he means, Haldane offers the example of war.
By making mankind more powerful, science has created the reductio ad absurdum of modern warfare, and thus created the circumstances that make world government more possible, since it is the only vehicle that might stop apocalyptic self-destruction. (He wrote this essay, remember, in the wake of what was then history's bloodiest war and at a time when the League of Nations still seemed to hold promise.) As Haldane puts it: "Moral progress is so difficult that I think any developments are to be welcomed which present it as the naked alternative to destruction, no matter how horrible may be the stimulus which is necessary before man will take the moral step in question." Our moral future thus depends on flirting with the technological brink, which we seem destined to do whether we like it or not. Haldane seems to believe that science first pushes society to become more just according to the local standard of justice (the scientific worker is brought up with the moral values of his neighbors). But then science, by increasing our power and changing our circumstances, helps to destroy that standard (an alteration of the scale of human power will render actions bad which were formerly good). So at the very moment that society is forced to become more just, it is on the way to becoming more outworn. When Haldane concludes that the prospect for humanity is hopeful if mankind can adjust its morality to its powers, he means that progress can only in the most limited sense be seen as the achievement of what was ineffectively advocated by prophets and poets. His effort to soften his teaching on science's power of moral destruction fails; progress is not the realization of old ideals but the necessary birth of new ones. It is just because even the least dogmatic of religions tends to associate itself with some kind of unalterable moral tradition, that there can be no truce between science and religion.
Haldane eventually returns to what is central in his essay: the influence of the man for whom reason has become the greatest and most terrible of the passions. The essay concludes with a poetic evocation of the lonely figure of Daedalus, conscious and proud of his ghastly mission, "Singing my song of deicides." From this point of view, moral progress would mean adopting the view that mythology and morals are provisional or situational, with Daedalus creating the situations. In effect, Haldane transforms might makes right into the hallmark of moral progress, an odd but deeply telling conclusion for an essay that has come to be seen as an optimistic assessment of the future of science. Why does Haldane fail to appreciate this result? One reason is clearly his romantic image of the scientist as a crusader for truth without regard to consequences, and another reason is the need to free the scientist to work unmolested despite all the acknowledged problematic consequences of doing so. But more deeply, this moral concession to scientific might is perhaps obscured for Haldane by his understanding of the evolving character of scientific power, that is, by his idea of the gradual conquest, first of space and time, then of matter as such, then of his own body and other living beings, and finally the subjugation of the dark and evil elements of his own soul. Part of what Haldane has in mind by this growing, but always incomplete, process of conquest is evident both in his look backward at past discoveries and his look forward at future possibilities. By looking to both past and future, he is attempting to overcome our prosaic acceptance of current abilities, to highlight how remarkable they would look from the perspective of the past, and how we might be similarly impressed (or naïvely horrified) by what the future will make possible.
He wants us to be awed by what human beings can achieve through our divine faculties of reason and imagination, and so to believe in the self-transcending possibility of self-directed evolution. By realizing the temporary character and utter foreignness of the human past, we might put our faith in a post-human future.

Inventing the Future

This post-human project comes out even more clearly in Haldane's story, "The Last Judgment," where he attempts to look forty million years into the future of mankind. In this vision of the future, man's use of tidal power changes the orbit of the moon, drawing it close enough to be destroyed and to destroy all life on Earth. In the meantime, mankind makes multiple efforts to reach, colonize, and terraform Venus, taking half a million years to achieve the first successful landing. Realizing the hostile conditions for life on Venus, a group of men set out to restart evolution; for by then, natural selection had been stopped and mankind had reached a state of happy equilibrium indistinguishable from utter stagnation. Confronted once more with an ideal as high as that of religion but more rational, a task as concrete as but infinitely greater than that of the patriot, man became once more capable of self-transcendence. After only ten thousand years, a genetically engineered offshoot of humanity is created, at odds with its environment, hence driven and unhappy, hence a being that can survive on Venus. These early settlers develop into a superorganism of individuals mentally linked to one another, and they prepare a race capable of colonizing the outer planets. Read in conjunction with Daedalus, the story illustrates Haldane's view of the consequences of our increased scientific and technological powers: on the one hand, destroying Earth and all human life, and on the other hand, self-consciously directing human evolution into a form that can thrive elsewhere.
The noble goal of self-transcendence does not produce happiness, but happiness means stagnation. Haldane was familiar enough with the work of H.G. Wells to anticipate the likely reaction to such a story. In its own time, it fires the imagination, and hence serves the author's purpose: to inspire people to look to the future for guidance rather than the past. Seen in retrospect, its very quaintness fuels pride in actual accomplishments. But this way of understanding progress has a troubling side as well, which is well illustrated in British author Olaf Stapledon's work Last and First Men, written very much under the influence of Haldane. The book is a future history covering some two billion years, being dictated to the author by one of the last men. During this period, eighteen species of men, all of them human descendants but few recognizably human, rise and fall, first on Earth, then on Venus, then finally on Neptune. The Stapledon story, whose early millennia clearly elaborate on "The Last Judgment," is rich in satire and imagination. Stapledon creates distinctive races of men with their own abilities, physical characteristics, and cultures: men that can fly, men with telepathic powers, men that are nothing more than huge brains. Civilizations rise and fall due to violence or stagnation; religions and social movements form on the basis of misunderstandings; the past is forgotten and rediscovered. But at a certain point all the races face the necessity or desire for self-transcendence, the inner drive or external push to be more than themselves. And it is just at this moment that most races destroy themselves, either deliberately via successful evolution of their successors, or unintentionally by unwise use of their scientific powers.
Despite the cyclical character of the story, marked by the rise and fall of different races, there is also a broad progressive tendency in the races' increased power over their physical worlds, over their own bodies and minds, and finally over their own pasts. Some races are happier than others; some periods of time are more blessed. But overall, the last men look back at the story and see it as a tragedy. If actual grief has not preponderated over joy, it is because, mercifully, the fulfillment that is wholly missed cannot be conceived. The last men discover that their own end is coming due to the disintegration of the Sun, and they cannot conceive of a way to save themselves. Instead, they engage in two god-like efforts. The first is an attempt to redeem the tragic past by participation in it, exemplified by sending this history back to their ancestors. (Stapledon does not here trouble himself much with the paradoxes of time travel.) The last men hope that what they see as signs of providence (signs for which they are not responsible) are evidence of a future intelligence yet greater than their own. The second god-like effort is an attempt to seed the cosmos with life, in the hope of beginning somewhere else the long evolution towards intelligence. What drives them, even knowing that there is a limit to their days, is that same impulse for self-transcendence, which becomes their effort to redeem the whole tragic history of intelligent life. With the end looming, they seek to make the finite eternal: If ever the cosmic ideal could be realized, even though for a moment only, then in that time the awakened Soul of All will embrace within itself all spirits whatever throughout the whole of time's wide circuit. And so to each one of them, even to the least, it will seem that he has awakened and discovered himself to be the Soul of All, knowing all things and rejoicing in all things.
And though afterwards, through the inevitable decay of the stars, this most glorious vision must be lost, suddenly or in the long-drawn-out defeat of life, yet would the awakened Soul of All have eternal being, and in it each martyred spirit would have beatitude eternally, though unknown to itself in its own temporal mode. Is this passage simply like others in the story, where Stapledon is more obviously satirizing self-deceptive mystical beliefs? And are we to believe that the real future of intelligence rests with the last men's effort to seed the galaxy with life? If so, then the tragic element of the story becomes the final moral lesson: If intelligence arises again, why should not the whole bloody mess simply repeat itself in some new way? Yet it seems more likely that this passage is not satire at all, and through his own future history Stapledon comes to an important insight: perhaps the human desire for self-transcendence is really a world-transcending aspiration, an attraction to infinity. Properly understood, that attraction might open the door to genuine religious faith. Haldane approaches a similar conclusion at the end of "The Last Judgment," where he acknowledges that religion and science teach some of the same lessons, although for different reasons. Religion says that it is a mistake to think that one's own ideals should be realized, because God's ways are not our ways. Science says instead that human ideals are the products of natural processes that do not conform to them. Religion teaches an emotional attitude to the universe as a whole, a sense of human limitation that is only confirmed when science illuminates the awesome immensities and complexities of the universe. Both teach us to conjecture what purposes may be developed and to think grandly about human plans and our unselfish cooperation in them. Both religion and science, in other words, teach that events are taking place for other great and glorious ends which we can only dimly conjecture....
Without necessarily accepting such a view, one can express some of its implications in a myth. If there is even this degree of convergence between religion and science, why prefer myths of the future over existing stories of God's presence in history? Why look to the future instead of the past? The answer, for Haldane, is because such future-oriented stories are obviously provisional, because they glorify human power and achievement and carry the authority of science, and because they can be constructed to propose no moral absolutes. Daedalus is a delightful essay, literate and witty. As a scientist, Haldane deserves credit for refusing to provide a guarantee for the human future, and he is right to suggest that our uncertainty stems from the old paradox of human freedom re-enacted with mankind for actor and the earth for stage. But for all the charm of Daedalus, Haldane does not recognize that this great paradox is being reenacted without a moral compass, and thus without any serious basis to call what may happen in the future, even if we do not destroy ourselves, genuine progress. The substitution of science fiction for religious tradition is not obviously an advance when it comes to making serious judgments about great and glorious ends, particularly if those ends finally derive from Daedalus's willful quest for power. In the end, scientific progress parallels moral progress only if might does indeed make right. And while Socrates might honor the curiosity of Daedalus, even he could not accept such a blind definition of the human good.

Servant of the Ruling Class

Bertrand Russell's reply to Haldane does not start in an especially promising way. He characterizes Daedalus as an attractive picture of the future as it may become through the use of scientific discoveries to promote human happiness, which hardly seems an adequate description of Haldane's intention or his belief that the future happiness of our descendants will probably not look attractive to us.
In contrast, Russell thinks that science will continue in the future to do what it does in the present: not serve human happiness in general but serve the power of dominant groups. This is a proposition that Haldane would not necessarily deny, although he has a deeper view of exactly who is whose master. Russell then says that he will focus on some of the dangers inherent in the progress of science while we retain our present political and economic institutions; yet again, a premise with which Haldane would almost surely agree. So far, at least, there would seem to be no real debate between the two men. Like Haldane, Russell divides his discussion into various fields of science (physical, biological, anthropological), and he freely combines projection into the future with satiric commentary on the present. In laying out his broad purpose, Russell eventually adumbrates his first real differences from Haldane. Acknowledging the huge effect science has made in shaping the world since Queen Anne's time, Russell observes that the impact of science can take two basic forms: first, without altering men's passions or their general outlook, it may increase their power of gratifying their desires, and second, it may change their outlook on the world, the theology or philosophy which is accepted by energetic men. Russell will focus, he says, on the first kind of effect: how science serves existing desires rather than how it creates new worldviews. This restriction appears curious at first sight, for it gives the appearance of circularity to Russell's understanding of the results of scientific progress. If he thinks science is problematic under present circumstances, it may be because he is not interested in thinking (à la Haldane) about the manner in which science may form and change those circumstances. Perhaps he sees science serving the interests of today's dominant groups because he is not considering how it might create new dominant groups.
Russell thus excludes from the start the possibility that science will be anything but conservative, and he appears at first critical of modern science precisely for this conservatism. The divide between the two men turns out to revolve precisely around this difference of emphasis. The key to Russell's response to Haldane is understanding why Russell thinks that, on balance, science is more likely to serve existing power structures than to challenge them. Russell announces his answer in brief early on: "Science has increased man's control over nature, and might therefore be supposed likely to increase his happiness and well being. This would be the case if men were rational, but in fact they are bundles of passions and instincts."

The Cynical Utopian

Russell's focus in Icarus is on the physical and anthropological sciences, which he sees as having had a fourfold effect: increase of population, increase of comfort, increased energy for war, and increased need for large-scale organization. The fact that modern industrialism is a struggle between nations for two things, markets and raw materials, as well as for the sheer pleasure of domination, means that war and large-scale organizations are particularly important. The place of science in this struggle is ambiguous. While on one page he says that the national character of organizational rivalry is something with which science has nothing to do, just a couple of pages later he concludes that the harm that is being done by science and industrialism is almost wholly due to the fact that, while they have proved strong enough to produce a national organization of economic forces, they have not proved strong enough to produce an international organization. What stands in the way of international organization, he argues, is that the pleasure produced by rivalry is the driving motivation among the few rich men who control big business.
To think that their goal is wealth is to misunderstand them, like thinking that scoring goals is the point of soccer. Were that true, teams would cooperate, for then many more goals could be scored. So too with business: more cooperation would mean more wealth. But in both instances, the really important thing, the team rivalry, would be missing. The power vested in these large organizations is already so great that the ideals of liberalism are wholly inapplicable to the modern world; there is no liberty except for those who control the sources of economic power, no free competition except between States by means of armaments. The only hope for freedom or democracy in a scientific civilization would be if economic and nationalistic competition were to produce one big winner, establishing a cruel and despotic global tyranny. But in time, Russell hopes, the energy of the tyrants at the top might flag, leaving behind a stable world-organization, a diminishment of the evils which now threaten civilization, and a more thorough democracy than that which now exists. Where Haldane looks to the possibility of self-destruction as the potential impetus to moral progress, Russell looks to tyranny as the potential pathway to peace. Both Russell and Haldane believe that scientific progress will be best assured under world government. But why this should be so requires some elucidation. Clearly, the key problem for Russell is rivalry combined with the power of modern science, which is one powerful example of how our passions and instincts lead to irrational results as circumstances change. It is clear how tyrannical centralized control could use the power of science to limit rivalry, but less clear how rivalry would not arise even with world organization, once that control loosened and the organization became a more thorough democracy. 
A telling example of how Russell sees world government and its relationship to science comes when he discusses the need to implement birth control measures, particularly, he seems to expect, among non-white races, so that no nation will grow much faster than others. He expects white races, already showing signs of population decline, to use more prolific races as mercenaries, threatening a revolt that ends in the extermination of the white races. The casual racialism behind such thinking, however common at the time among progressive intellectuals, confirms the extent to which world government, tyrannical or not, is unlikely to be premised on human political equality. When it comes to eugenics and the goal of producing a better race, however, Russell is not a naïve inegalitarian, and it is here that we reach the crux of his disagreement with Haldane. Like Haldane, Russell expects that eugenic efforts will be attempted and may even work, but on the whole he is skeptical about the moral prospects of positive eugenics. Where Haldane imagines democratic campaigning for this or that eugenic ideal ("Vote for Smith and more musicians"), Russell thinks that such decisions would of course be in the hands of State officials, presumably elderly medical men. "Whether they would be preferable to Nature I do not feel sure. I suspect they would breed a subservient population, convenient to rulers but incapable of initiative. However, it may be I am too skeptical of the wisdom of officials." Russell is also skeptical when it comes to the biochemical control of behavior. This novel capacity would give those in charge power beyond the dreams of the Jesuits, but there is no reason to suppose they will have more sense than the men who control education today. Technical scientific knowledge does not make men sensible in their aims, and administrators in the future will be presumably no less stupid and no less prejudiced than they are at present.
In this, at least, his utopianism about world government is moderated by his realism about human folly and perversion. Russell raises this skepticism to the level of principle: Science increases the power of those in power. If their ends are good, they can achieve more good; if their ends are evil, more evil. In the present age, the purposes of the holders of power are in the main evil, so science does harm. Science is no substitute for virtue; the heart is as necessary for a good life as the head. By heart, Russell means the sum-total of kindly impulses which make people indifferent to their own interest but in fact serve that interest, once it is properly distinguished from a rationalized impulse to injure others. Intelligence plus such deliberate desire would be enough to make the world almost a paradise. Russell is reasonably certain that science could increase the kindly impulses, but also reasonably certain it will never happen. Those who would make the discovery and administer the treatment (he imagines a secret society of physiologists kidnapping and treating world leaders) would already have to be governed by natural kindness, otherwise they would prefer to win titles and fortunes by injecting military ferocity in recruits. And so we come back to the old dilemma: only kindliness can save the world, and even if we knew how to produce kindliness we should not do so unless we were already kindly. The remaining alternatives, Russell believes, are self-extermination or world-wide domination by one group, say the United States, leading eventually to an orderly world government. Yet the sterility of the Roman empire leads Russell to conclude by wondering whether the collapse of our civilization is perhaps the best answer after all. Such glib and world-weary statements are part of what made Bertrand Russell the man we remember as Bertrand Russell. But there remains a serious claim being put forward. 
To Haldane's core assertion that science will produce progress by giving human beings the choice of reform or oblivion, Russell responds that we will likely, and perhaps even should, choose oblivion. Haldane looking forward sees future evolution as our best hope; Russell looking backward sees our evolutionary heritage as a fatal flaw. The full force of an analogy used by Russell at the beginning of his essay only becomes clear at the end: Dogs, he noted, overeat because they are descendants of wolves, who needed to be driven by insistent hunger. Under domestic circumstances, this retained drive hurts dogs. Likewise, human beings have instincts of power and rivalry that are inconsistent with our well-being, and hence self-destructive under present circumstances. And these instincts, it seems, are more likely to be gratified by means of science than altered. We are creatures of our nature, creatures of our passions. Coming closer to the technological brink is not likely to change this fact. This outlook helps explain why Russell does not meet Haldane head on by looking at the way science changes the outlook of energetic men. Whatever the guiding theology or philosophy of the day, however influenced it may be by modern science, natural instinct will win out. Science is no substitute for virtue, Russell notes, but he puts little weight on the ability of virtue to counter the raw human instinct for power, injury, and rivalry. Russell's skepticism about the strength of virtue creates a moral vacuum, which leads him to dark and dire conclusions. One does not have to believe in man's overwhelming goodness to wonder whether Russell's outlook is grounded more in fashionable cynicism than moral realism. If injury, power, and rivalry were as powerful as Russell suggests, then it is hard to see how life is not a great deal more terrible than it already is. Moreover, it is not obvious why the generous and kindly impulses must take a back seat to the darker passions.
Russell assumes, at best by analogy, that the rivalrous impulses would be those more conducive to survival. But by his own admission, virtue is not simply unnatural and may act to our benefit. As an example, he cites the Quakers, who controlled a natural greedy impulse in the name of a moral principle (don't misrepresent prices) and had success as a result. If once useful impulses can become self-defeating, why can't kindly impulses take their places? In reality, we discover that virtue is of far less interest to Russell than it ought to be. His cynicism about morality's sway over the human soul is really born of dissatisfied utopianism: "If men were rational in their conduct ... intelligence would be enough to make the world almost a paradise." But as civilization is not made up mostly of Bertrand Russells, there is little hope for anything other than collapse. From this point of view, Russell looks like a disappointed Haldane, the Haldane who looks with apparent equanimity on the possibility that humanity may finally prove itself unworthy of survival by not surviving. As Haldane put it, "At worst our earth is only a very small septic area in the universe, which could be sterilized without very great trouble, and conceivably is not even worth sterilizing." By different roads and for different reasons, both authors come to the same anti-human conclusion. The core difference is that Haldane believes we might become something better by shattering what we are now.

The Real Meaning of Progress

So where does this debate leave us? It is telling that Haldane refers to G. K. Chesterton towards both the beginning and ending of his essay. The second time he quotes lines of poetry by Chesterton, without attribution, to acknowledge yet again the potentially destructive power of the human intellect. The first time he criticizes The Napoleon of Notting Hill, which prophesied that hansom-cabs would still be in existence a hundred years hence owing to a cessation of invention.
Within six years there was a hansom-cab in a museum. In commenting on this apparent failure of prediction, Haldane gives some indication that he might understand that Chesterton was not really predicting at all, but satirizing predictors just like himself, who (in Chesterton's words) project small things of the present into big things of the future, just as when we see a pig in a litter that is larger than the other pigs, we know by an unalterable law of the Inscrutable it will someday be larger than an elephant. But it is also possible that Haldane missed the more serious point of Chesterton's book: even if the future were to look like the present with respect to hansom-cabs, it would not mean that we are failures in the ways that matter most. There would still be ample room for the whole range of human abilities and aspirations to play themselves out both for good and for ill. This truth is likely to be lost if we understand the human story in terms of the aspirations outlined in Daedalus. Haldane believes in the possibility, although not the necessity, that science will lead to the progressive improvement of the world, because he thinks that human beliefs can accommodate themselves to the changing conditions created by the vast increases in human power. We are driven down that path by a hitherto inchoate, and potentially self-destructive, desire for self-transcendence, a desire that comes into its own when we have the power to make it real. Progress cannot be measured by human happiness, because happiness would produce stagnation. But Haldane's notion of progress is by necessity discontinuous, since the goodness of one stage of the human story will not be recognizable as such by those at a different stage. Only some imagined being of the far future, heir to the whole human narrative, might be able to look back and see (or construct) the thread that binds it all together, redeeming a chaotic and otherwise tragic past.
Russell rejects Haldane's picture of progress, because he thinks that there is a fixity to those aspects of human nature that will lead us to use the increased powers granted by science to destructive ends. The powers of science could potentially be used to alter our nature, Russell believes, but our nature provides significant disincentives to doing so in any manner that will serve good ends. Generosity is in short supply, so we should not expect to be engineered or biochemically manipulated to be nicer to each other. To do so we would need to be nice already. Unlike Haldane, Russell in this essay does not explicitly make the realm of virtue and kindly impulses situational, but he does believe that morality is very weak in comparison with other drives. Absent some utopian re-ordering of the world, science really is giving matches to babies. For Russell, science places us on the edge of a cliff, and our nature is likely to push us over the edge. For Haldane, science places us on the edge of a cliff, and we cannot simply step back, while holding steady has its own risks. So we must take the leap, accept what looks to us now like a bad option, with the hope that it will look like the right choice to our descendants, who will find ways to normalize and moralize the consequences of our choice. Russell disarms virtue, Haldane relativizes it. The net result is that a debate about science's ability to improve human life excludes serious consideration of what a good human life is, along with how it might be achieved, and therefore what the hallmarks of an improved ability to achieve it would look like. Shorn of serious moral content, the measures of progress (if it can be said to exist at all) become our amazement at or dissatisfaction with all our discoveries and inventions, our awed anticipation of what might yet be achieved, our terror about what might go wrong along the way.
The result of framing the question of scientific progress in this way is evident in the very structure of most popular discussions of science, both in books and on television. Start with a little history to produce an attitude of pride that we know so much more than we once did. Look at what we know now, and stress the dangers of our remaining ignorance. Anticipate the future, and how humbled we are that those who follow us will know far more than we do, if only we stick with it. Above all, the very thinness of any notion of progress that survives the Haldane-Russell debate (little more than the fact of accumulation of knowledge and a vague hope that things might turn out well in light of unspecified yet grand civilizational projects) helps to explain the widespread belief that any effort to restrain science on the basis of ethics represents a threat to scientific progress. To see this as simply a result of the self-interest of scientists is to do them an injustice. Like Haldane, most scientists are probably unaware of how the belief that morality must adjust to scientific and technological change amounts to saying that might makes right. The sense of threat is partly due to the poverty of thought on the subject, and perhaps the narrow education that is required for making measurable scientific achievements. For restraint doubtless would slow accumulation, and (from this point of view) can only represent the triumph of fear over hope. But what is to be said for accumulation when Russell and Haldane have done with it? It serves either the power of the conventionally powerful or the power of the scientists. A clear-eyed defense of science needs to take seriously the original bargain that Haldane himself describes: that free research produces increased well-being. To investigate the meaning of well-being, or doing well, means neither the dogmatic acceptance nor the dogmatic rejection of the moral values of one's neighbors.
It requires avoiding cynicism and utopianism about human motives and possibilities. It requires a willingness to look at the question of the human good with care and seriousness. And even if such an investigation yields a complex and mixed picture of what a good life is and how science contributes to it, the defense of science still requires the willingness to encourage what is valued and discourage what is troublesome, knowing that we will face many grave uncertainties and honest disagreements along the way. The Greek tale of Daedalus and Icarus illustrates that doubts over the results of human knowledge and ingenuity are hardly new. The debate enshrined in Daedalus and Icarus suggests that today the great increase in our powers co-exists with a diminished capacity to think about them with any kind of moral realism. By slighting ethics, Haldane and Russell did not serve the cause of science well, since science only matters in human terms if it truly serves our humanity. And that is by no means guaranteed. ______________ Charles T. Rubin is an associate professor of political science at Duquesne University. Previous New Atlantis Articles by Charles T. Rubin [8]"Man or Machine?" (Winter 2004) [9]" Artificial Intelligence and Human Nature" (Spring 2003) Published by the [10]Ethics and Public Policy Center, Washington, D.C. References 8. http://www.thenewatlantis.com/archive/4/rubin.htm 9. http://www.thenewatlantis.com/archive/1/rubin.htm 10. http://www.eppc.org/ From checker at panix.com Wed Jul 13 22:10:17 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:10:17 -0400 (EDT) Subject: [Paleopsych] BBC: 'Over-friendly' brain clues found Message-ID: 'Over-friendly' brain clues found http://news.bbc.co.uk/2/hi/health/4664541.stm Scientists have uncovered clues about what happens in the brain to make some people "over-friendly". 
US National Institute of Mental Health experts looked at differences in the brains of people with an abnormality which makes them highly sociable. Researchers used scans to identify areas which failed to work properly when they saw frightening faces. In Nature Neuroscience, they say this could give clues for understanding social disorders in others. Scary faces People with the genetic condition Williams Syndrome lack around 21 genes on chromosome seven. Their lack of fear means they will impulsively engage in social situations, even with strangers. But they often have heightened anxiety about non-human fears, such as spiders or heights. The condition affects around one in 25,000 people. The US team focused on the amygdala, an almond-shaped structure deep in the brain which has been thought to help regulate social behaviour. fMRI (functional Magnetic Resonance Imaging) scans were used to study the brains of 13 healthy volunteers and 13 with Williams Syndrome. All were shown pictures of angry or scary faces. In healthy brains, seeing such images would provoke a strong response in the amygdala. However, the fMRI scans showed far less activity in those of people with Williams Syndrome. Study participants were then shown pictures of threatening scenes, such as plane crashes, which did not have any people or faces in them. The amygdala response was seen to be abnormally increased in participants with Williams Syndrome. 'Poor orchestration' The researchers also identified three key areas of the prefrontal cortex, located in the front part of the brain, which did not behave normally in people with the syndrome.
They were the dorsolateral area - linked to establishing and maintaining social goals governing an interaction; the medial area - associated with empathy and regulating negative emotions; and the orbitofrontal region - involved in assigning emotional values to a situation. Thomas Insel, director of the NIMH, said: "Social interactions are central to human experience and well-being, and are adversely affected in psychiatric illness. "This may be the first study to identify functional disturbances in a brain pathway associated with abnormal social behaviour caused by a genetic disorder." From checker at panix.com Wed Jul 13 22:10:36 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:10:36 -0400 (EDT) Subject: [Paleopsych] Zenit: On Feminism, Eugenics and "Reproductive Rights" Message-ID: On Feminism, Eugenics and "Reproductive Rights" Zenit News Agency - The World Seen From Rome http://www.zenit.org/english/visualizza.phtml?sid=74119 Code: ZE05071201 Date: 2005-07-12 Interview With Journalist Eugenia Roccella ROME, JULY 12, 2005 (Zenit.org).- "Reproductive rights" are a means to wield demographic control in poor countries and to destroy the experience of being a woman, says journalist Eugenia Roccella. A 1970s leader of the women's liberation movement, Roccella is the author of essays on feminism and women's literature. With Lucetta Scaraffia, she has just published the book "Against Christianity: The U.N. and European Union as New Ideology," published by Piemme. In this interview with ZENIT, Roccella talks about the anti-birth ideology of international institutions such as the United Nations and European Union. Q: You maintain that so-called reproductive rights are a deception to foster family planning and genetically selective births. Can you explain the evolution of "reproductive rights" and how opposition to births has been transformed into eugenics? 
Roccella: What must be clarified in the first place is that so-called reproductive rights are in reality rights not to reproduce oneself, and they have been made concrete in governments' control over feminine fertility by a worldwide policy of dissemination of abortion, contraception and, above all, sterilization. It is generally believed that the adoption of these rights by international organizations has been a victory of the women's movement. But from the documents one can see that this is not so. Historically, the right to family planning arose from the pressure of powerful international anti-birth lobbies -- for example, the Rockefeller Foundation -- helped by the West's desire to exercise demographic control over the Third World. Suffice it to consult the excellent documentation in the book provided by Assuntina Morresi, which demonstrates how much associations of a eugenic vein have influenced U.N. policies, through NGOs such as, for example, the IPPF [International Planned Parenthood Federation]. Anti-birth attitudes and eugenics have been closely intertwined from the beginning: The idea of building a better world through genetic selection was very widespread at the start of the 20th century, and enjoyed great credibility even in learned circles. The objective was to prevent the reproduction of human beings regarded as second-class, namely, genetically imperfect, even through coercion. The adoption of eugenic theories by the Nazi regime discredited the theories and elicited international condemnation. But associations born for this purpose -- among them, precisely, the IPPF -- have survived, changing their language and using, in an astute and careless way after the '70s, some slogans of the women's movement, such as "free choice." In reality, international conferences on population, that is, on demographic control, have always preceded conferences on women, and have prepared their code words.
For example, it was at the Cairo Conference of 1994 on population and development that the old "family planning" was replaced by the new definition of "reproductive rights." The following year, the definition was uncritically accepted and appropriated by the Women's Conference in Beijing, without changing a comma. Feminism has been, paradoxically, an easy mask to implement control practices that are often savage and violent on women's bodies, especially in Third World countries. In the book, among other things, we illustrate some cases by way of example, such as the anti-natal policies adopted in China, Iran, India and Bangladesh, where poverty and the absence of consolidated democratic mechanisms have made women easy victims of experimentation, contraceptives dangerous to health, massive sterilizations and forced abortions. Q: It is a widespread opinion that the feminist movement has contributed to the obtaining of women's rights. You maintain, instead, that there are ambiguities and mistakes. Could you explain what these are? Roccella: Feminism is a galaxy of different movements and philosophies which is absolutely not homogenous. International organizations have adopted a rigidly emancipating version which tries to equate men and women as much as possible. This is translated, for example, in the idea -- never explicitly stated but always present -- that maternity is an impediment to women's fulfillment, and not a central element of the gender's identity which must be valued and protected. Thus, in the U.N. and the European Union an institutional feminism has been created based altogether on individual rights and parity, which has chosen reproductive rights as its own qualifying objective. 
There is, instead, a feminine philosophy of an opposite sign -- the so-called philosophy of difference -- which maintains that the myth of equality prevents women from thinking of themselves autonomously, and that the sexual difference, rooted in the body, is not only a biological fact, but something that encompasses the whole experience of being woman. With this feminism, the Church has had an open dialogue for a long time; suffice it to read Pope Wojtyla's letter on the feminine genius, and especially the most recent one addressed to bishops and signed by the then Cardinal Ratzinger. But at present, at the international level, it is the feminism "of rights" which has prevailed, imposing reproductive rights as a flag that must be flown always and everywhere. Instead, women's priorities, in the various geographic areas, are different: In Africa, there is the urgent and dramatic problem of containing birth and postnatal mortality. There is also the problem of sexually transmitted diseases and malnutrition. In the Muslim theocracies the objective for women is legislative equality and liberation from the oppressive control over public behavior -- for example, the use of the burkha. In Europe, the problems are altogether different, and so on. The U.N. resolutions stem from the assumption that the offers of abortion and contraception are, in any context, elements of emancipation, including empowerment, that is, the enhancement of women's power. But the concrete cases analyzed in the book show that this is not the case. In Iran, for example, programs for the dissemination of control of fertility have been very successful, but women continue to be regarded as second-class citizens, subject to masculine authority. Q: On the great topics regarding the defense of life and of the natural family, the Holy See has often confronted the international organizations, particularly the United Nations and the European Union. 
You entitled one of the chapters in the book "Europe Against the Vatican." Could you explain the essence of the controversy? Roccella: The prevailing cultural plan in Europe is a secularist extremism that regards religions as potential bearers of fundamentalist demands. The European Union, however, adopts many precautions, both political as well as verbal, in the face of the Muslim world. They are precautions that would be comprehensible if they did not create a visible imbalance vis-à-vis the Vatican, which instead is attacked with perfect serenity every time it is possible. The result is that Catholicism appears as the bitterest enemy of woman in the international realm, because it is opposed to the ideology of reproductive rights and demographic control. This cultural operation is resolved in a sort of suicide of identity, as has already occurred with the mention of the Christian roots in the European Constitution. ... It must not be forgotten that, from the beginning, Christianity has had an extraordinary idea of woman, and it is no accident if the fight for sexual equality has developed essentially in the Christian area. Among all the religions, the Christian religion is the only one, for example, whose rite of initiation, baptism, is open to both sexes. Within the Catholic realm there is a strong feminist philosophy, and the two last papacies have given great cultural dignity to this philosophy. But all this is silenced by a plan that favors the anti-religious element. The EU, even if it maintained the same policy, could modulate in a different way its attitude to the different religious creeds, fostering motives for agreement. For example, it would be easy to find instances of unity with the Holy See on the protection of maternity, on international policies against maternal and infant mortality and on feminine schooling, or even on the recognition of women's political and economic rights.
Instead, preference is given to putting all religions in the same bag and pointing to the Vatican as the enemy par excellence of feminine emancipation. From checker at panix.com Wed Jul 13 22:17:05 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:17:05 -0400 (EDT) Subject: [Paleopsych] BBC: Why 'imaginary voices' are male Message-ID: Why 'imaginary voices' are male http://news.bbc.co.uk/2/hi/uk_news/education/4675103.stm A university research team says it has discovered why most people "hearing voices" in hallucinations say they hear male voices. Dr Michael Hunter's research at the University of Sheffield says that male voices are less complex to produce than female. As such, when the brain spontaneously produces its own "voices", a male voice is more likely to have been generated. Among both men and women, 71% of such "false" voices are male. 'False perception' "Psychiatrists believe that these auditory hallucinations are caused when the brain spontaneously activates, creating a false perception of a voice," says Professor Hunter of the university's psychiatry department. "The reason these voices are usually male could be explained by the fact that the female voice is so much more complex that the brain would find it much harder to create a false female voice accurately than a false male voice," he says. Such imaginary voices are typically likely to be middle-aged and carry "derogatory" messages. The research, published in NeuroImage, shows how the brain interprets information from human voices - and how female and male voices activate different parts of the brain. "The female voice is more complex than the male voice, due to differences in the size and shape of the vocal cords and larynx between women and men, and also due to women having greater natural 'melody' in their voices. "This causes a more complex range of sound frequencies than in a male voice," says Professor Hunter. 
These gender differences in voices trigger responses in different parts of the brain - and as the male version is simpler, both men and women who hear voices are on average more likely to produce a male-sounding voice. The research says that "auditory verbal hallucinations" are a symptom of schizophrenia and "occur in 40% to 60% of patients who suffer from the condition". From checker at panix.com Wed Jul 13 22:17:11 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:17:11 -0400 (EDT) Subject: [Paleopsych] BBC: Universe 'too queer' to grasp Message-ID: Universe 'too queer' to grasp http://news.bbc.co.uk/2/hi/science/nature/4676751.stm Scientist Professor Richard Dawkins has opened a global conference of big thinkers warning that our Universe may be just "too queer" to understand. Professor Dawkins, the renowned Selfish Gene author from Oxford University, said we were living in a "middle world" reality that we have created. Experts in design, technology, and entertainment have gathered in Oxford to share their ideas about our futures. TED (Technology, Entertainment and Design) is already a top US event. It is the first time the event, TED Global, has been held in Europe. Species software Professor Dawkins' opening talk, in a session called Meme Power, explored the ways in which humans invent their own realities to make sense of the infinitely complex worlds they are in; worlds made more complex by ideas such as quantum physics which is beyond most human understanding. "Are there things about the Universe that will be forever beyond our grasp, in principle, ungraspable in any mind, however superior?" he asked. "Successive generations have come to terms with the increasing queerness of the Universe." Each species, in fact, has a different "reality".
They work with different "software" to make them feel comfortable, he suggested. Because different species live in different models of the world, there was a discomfiting variety of real worlds, he suggested. "Middle world is like the narrow range of the electromagnetic spectrum that we see," he said. "Middle world is the narrow range of reality that we judge to be normal as opposed to the queerness that we judge to be very small or very large." He mused that perhaps children should be given computer games to play with that familiarise them with quantum physics concepts. "It would make an interesting experiment," he told the BBC News website. ET worlds Our brains had evolved to help us survive within the scale and orders of magnitude within which we exist, said Professor Dawkins. We think that rocks and crystals are solid when in fact they are made up mostly of space between atoms, he argued. This, he said, was just the way our brains thought about things in order to help us navigate our "middle sized" world - the medium scale environment - a world in which we cannot see individual atoms. Because we exist in such a limited section of the universe, and given its enormous scale, we cannot expect to be the only organisms within it, Professor Dawkins believes. He concluded with the thought that if he could re-engineer his brain in any way he would make himself a genius mathematician. He would also want to time travel to when dinosaurs roamed the Earth. More serious focus Developing world economist and businesswoman Jacqueline Novogratz brought Professor Dawkins' thinking into focus, arguing that we need to fully engage with "developing worlds" to move away from "them and us" thinking. "The world is talking about global poverty and Africa in ways I have never seen in my life," she said. "At the same time I have a fear that the victories of G8 will see that as our moral absolution.
But that is chapter one; celebrate it, close it and recognise we need a chapter two - a 'how to'. "The only way to end poverty is to build viable systems on the ground that can deliver services to the poor in ways that are sustainable," she said. Former Afghan finance minister Ashraf Ghani added that globalisation was "on speed" and needed real private investment and opportunities to flourish. "Events of 7/7 and 9/11 remind us that we do not live in three different worlds; we live in one world." He criticised the West for being only concerned with design issues that affect them, and solving environmental problems for themselves. "You are problem solvers but are not engaging in problems of corruption," he told TED Global delegates. "You stay away from design for developments. Your designs are selfish; it is for your own immediate use. "We need your imagination to be brought to bear on problems the way meme is supposed to. It is at the intersection of ideas that new ideas and breakthroughs occur." More than 300 leading scientists, musicians, playwrights, as well as technology pioneers and future thinkers have gathered for the conference which runs from 12 to 15 July. From checker at panix.com Wed Jul 13 22:17:16 2005 From: checker at panix.com (Premise Checker) Date: Wed, 13 Jul 2005 18:17:16 -0400 (EDT) Subject: [Paleopsych] UPI: Scientists find clues to memory health. Message-ID: Scientists find clues to memory health By K.L. CAPOZZA http://www.sciencedaily.com/upi/index.php?feed=Science&article=UPI-1-20050712-12442400-bc-us-memory.xml SAN FRANCISCO, July 12 (UPI) -- Misplaced keys, faltering name recall, incomplete thoughts -- by age 50, many otherwise healthy adults begin to notice these insidious symptoms, all signs of short-term memory loss. Indeed, as we age, our memory function can decline by as much as 45 percent, researchers have found. 
Much remains to be learned about the processes that underlie memory loss, but science is beginning to discover ways to abate -- and possibly to halt -- cognitive decline. According to Michael Merzenich, chief scientific officer with Posit Science in San Francisco, the key to memory longevity is lifelong learning. "Often, as people age, they engage in less and less learning," Merzenich told United Press International. "They rest on their laurels, and their environments, even if stimulating (such as a job or hobbies), do not drive new learning." Merzenich's company is pioneering brain-training exercises for aging adults that, like calisthenics, keep the organ flexible, in good physical shape and functioning well into the golden years. The company's computer-guided exercises -- which are being marketed to assisted-living and retirement communities -- aim at augmenting memory and improving visual acuity and hearing. The memory exercises should be practiced five days a week for an hour a day for eight weeks -- a demanding regimen, but one that researchers think may mitigate memory loss. "As the brain gets into ruts, it is not challenged with new learning, and without crucial stimulation, the brain's function can gradually erode over time, leading to decreased memory and cognitive function," Merzenich explained. Undertaking a rigorous "brain fitness" program later in life may be only part of the answer, said Dr. Thomas Crook, former chief of the National Institute of Mental Health's Geriatric Psychopharmacology Program. Diet plays an absolutely key role in determining brain function later in life, he said, and establishing healthy eating habits early on can deliver dividends in old age. "Diet is very important. A generalization would be that those things that are good for the heart are good for the brain as well," Crook told UPI.
"We eat such massive amounts of food in this country that we end up with obesity and diabetes, which are in themselves problematic for memory." Likewise, exercise appears to contribute to better brain health, he said. "A lot of research is showing that aerobic exercise is particularly helpful," Crook said. "Even 30 minutes of walking per day can help. We know that vascular changes in the heart also apply to the brain, and exercise benefits both." The cartoon character Popeye may have been on to something, with his enthusiastic endorsement of spinach. According to a 2005 study by Harvard University researchers, fruit and vegetable intake is inversely related to cognitive decline -- the more fresh foods you eat, the better your chances of maintaining brain health. The Harvard group followed a cohort of female subjects from 1976 to 2001 and tracked their eating habits along with mental function over four decades. They found that the women who ate the highest amounts of green leafy vegetables (such as broccoli, greens and spinach) had the slowest mental decline. "The finding with cruciferous vegetables, we believe, may be because they are nutrient dense -- good source of vitamin C, beta carotene, B vitamins, which have all been found in some studies to be associated with better cognition," said Jae Hee Kang, lead author of the Harvard study. Crook also noted that a new compound of neuropeptides marketed as a dietary supplement appears to enhance nerve-cell synaptic and dendritic growth -- a process associated with improved memory. "We think the supplement is a useful addition to a heart healthy diet that includes low-fat food and modest portion sizes," he said. Crook conducted clinical trials on the compound, which has not yet hit the market. Despite his enthusiasm for the new supplement's potential benefits, however, he said most "nutriceuticals," including the much-touted ginkgo biloba, do not work.
"There's no sound evidence that Ginkgo, nor any of the witch's brews sold under clever names, improves learning and memory," Crook said. "I'm really quite negative about nutriceuticals in general." Before adults over age 50 start popping supplements and loading up on spinach, they should consult their physicians, who can assess if their perceived forgetfulness is, in fact, attributable to age-related memory loss. Sometimes, absent-mindedness may not be serious and can be confused with something as simple as fatigue, University of California, San Diego, researchers have found. They wrote in the July 2005 issue of the Journal of the American Geriatric Society that senior adults have more difficulty getting a good night's sleep because the body's circadian rhythms change with age. Seniors also may experience insomnia as a side effect of one of the many medications prescribed to older adults. The bottom line is that a good memory -- like a fabulously fit body -- requires good habits, diligence and discipline, Crook said. "It doesn't happen magically," he said. "It's like being in shape; you have to do a lot of work and exercise to get better at it." From shovland at mindspring.com Thu Jul 14 07:32:09 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Thu, 14 Jul 2005 09:32:09 +0200 (GMT+02:00) Subject: [Paleopsych] suicide bombings Message-ID: <3294469.1121326329485.JavaMail.root@wamui-andean.atl.sa.earthlink.net> (from Venice) Technically, there is no reason to use suicide bombers- one could just throw a grenade or satchel. So there is a "spiritual" element in this- a human sacrifice for a cause, like the Buddhist monks torching themselves in Nam. -----Original Message----- From: Michael Christopher Sent: Jul 12, 2005 12:57 AM To: paleopsych at paleopsych.org Subject: [Paleopsych] suicide bombings Christian says: >>Following Steve's comment "killing civilians may make perfect sense". 
I want to add that killing civilians is not an "evil" strategy that resistance fighters would not use hadn't they been infiltrated by terrorists. The strategy makes sense.<< --I think the major reason suicide bombings are used is that they result in high levels of media coverage, regardless of whether the tactic "works" in any strategic context. It works for groups that want increased status in pro-terrorist circles, it gets them fame and says, "We are a force to be reckoned with". But that's also a risk, if an attack is too outrageous and results in the terrorists being labeled murderers rather than martyrs in the Islamic and Arabic press. Dishonor is worse than death to a suicide bomber, and the best way to marginalize terrorists is to dishonor them, to stain their reputation in the eyes of the audience they seek to influence. Making them look ineffective is one way to do that, referring to them as apostates and murderers rather than martyrs is another. It's safe to say that, at least in the eyes of the victims, suicide bombings are "evil" if the word has any meaning. Most soldiers are taught to demonize and dehumanize their enemy in order to kill without guilt-induced paralysis, but those who deliberately attack civilians are going a level beyond, and it should be noted that "what works" and "what is permissible in a civilized world" are two different things. The message must be sent that terrorism fails, backfires, draws shame upon the groups that rely on it as a tactic. It should be equated with child molestation, not with the courage of conventional soldiers up against a greater force. Ideally, nonviolent resistance groups should be given opportunities to be seen as effective and powerful at the same time, increasing their status. The message should be, "Reject terrorism and repel an occupying government by appealing to the conscience of its people, rather than their fear".
This makes pragmatic sense -- when Americans or the British are afraid, they tend to want to feel powerful, and that means not giving in, not backing down. But both groups have high levels of compassion and would easily support a Palestinian nonviolence movement, or an Iraqi movement to end occupation through nonviolent resistance. If we saw Israeli tanks rolling over Palestinians on a daily basis, we'd all be demanding an end to the occupation, divesting from Israel and denouncing Israel's leadership. But if suicide bombings are on the front page, nobody will even notice if a tank rolls over a nonviolent protester. Regarding the "terrorists are driven to do it" argument that comes up pretty often, it is possible to explain the behavior of anyone, from an Israeli soldier to a Palestinian suicide bomber to a serial killer or child molester in any nation, in terms of cause and effect -- as long as one is consistent (i.e. all behavior is caused by something, regardless of the perpetrator). But, in a paradox, one side's basest behavior is often framed as a product of another side's provocations, while the other side's behavior is assumed to be uncaused and therefore inexcusable. "Palestinian suicide bombers are driven to do it... Israelis are the real terrorists" is a common example. More consistent would be to say "Members of both sides are driven to do what they do". Palestinian suicide bombers and abusive Israeli police (as opposed to conventional Palestinian fighters and ordinary Israeli police) are both driven to do what they do, and both should be marginalized so that their actions gain them no status or respect. Whatever the cause of suicide bombings, the effect should be the total and unambiguous rejection of terrorist tactics by every nation on earth. The alternative, sending the message that "terrorism works", is intolerable.
The consequences of sending that message would be equivalent to rewarding urban gangs by withdrawing police patrols from their "turf", or excusing police brutality on the grounds that cops "have bad days and are entitled to let off a little steam". >>If civilians are the only targets they can reach then that's what they will hit.<< --So here's a catch-22. Should Israel and the US make their soldiers easier to kill in order to discourage attacks against civilians, or should they leave, with terrorist groups getting a corresponding increase in status for having repelled the occupiers? If terrorism succeeds in Spain AND Britain, every group that wants *anything* will consider it an effective option. If the US withdraws from Iraq and Iraqi moderates are terrorized out of power, the results will also be pretty intolerable. So the question for any military power that finds itself up against decentralized terrorism is "How do we reinforce the message that terrorism fails and is rejected by all civilized people, while rewarding groups that reject terrorism as a tactic?" Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Thu Jul 14 21:05:37 2005 From: checker at panix.com (Premise Checker) Date: Thu, 14 Jul 2005 17:05:37 -0400 (EDT) Subject: [Paleopsych] UPI: Scientists Study Music Hallucinations Message-ID: Scientists Study Music Hallucinations http://www.sciencedaily.com/upi/index.php?feed=Science&article=UPI-1-20050712-12304700-bc-wales-hallucinations.xml [Thanks to Laird for this and the last one.] NEWPORT, Wales, July 12 (UPI) -- Psychiatrists at St. Cadoc's Hospital in Wales have issued the largest case-series study ever published concerning musical hallucinations.
Although the condition has been known for more than a century, it has rarely been studied, The New York Times reported Tuesday. It is believed musical hallucinations result from malfunctioning brain networks. Dr. Victor Aziz and Dr. Nick Warner analyzed 30 cases of musical hallucination covering 15 years and found in two-thirds of the cases musical hallucinations were the only mental disturbance experienced by the patients. Women tended to suffer musical hallucinations more than men, and the average patient was 78 years old. Religious music was heard in two-thirds of the cases. The researchers noted musical hallucinations differ from the auditory hallucinations of people with schizophrenia in that only music is heard. There is no established treatment; some doctors try antipsychotic drugs, while others use cognitive behavioral therapy. Aziz told the Times he suspects musical hallucinations will become more commonplace since people today are awash in music from radios, televisions, elevators and supermarkets. "I hope I live long enough to find out myself in 20 years' time," he said. The study appears in the July issue of the journal Psychopathology. From checker at panix.com Thu Jul 14 21:05:54 2005 From: checker at panix.com (Premise Checker) Date: Thu, 14 Jul 2005 17:05:54 -0400 (EDT) Subject: [Paleopsych] Paul R. Ehrlich and Simon A. Levin: The Evolution of Norms Message-ID: Paul R. Ehrlich and Simon A. Levin: The Evolution of Norms http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0030194 Paul R. Ehrlich is with the Department of Biological Sciences, Stanford University (Stanford, California, United States of America). Simon A. Levin is with the Department of Ecology and Evolutionary Biology, Princeton University (Princeton, New Jersey, United States of America). E-mail: slevin at princeton.edu Published: June 14, 2005 DOI: 10.1371/journal.pbio.0030194 Citation: Ehrlich PR, Levin SA (2005) The Evolution of Norms.
PLoS Biol 3(6): e194 Over the past century and a half, we have made enormous progress in assembling a coherent picture of genetic evolution -- that is, changes in the pools of genetic information possessed by populations, the genetic differentiation of populations (speciation) (see summaries in [1,2]), and the application of that understanding to the physical evolution of Homo sapiens and its forebears ([3]; e.g., [4,5]). But human beings, in addition to being products of biological evolution, are -- vastly more than any other organisms -- also products of a process of "cultural evolution." Cultural evolution consists of changes in the nongenetic information stored in brains, stories, songs, books, computer disks, and the like. Despite some important first steps, no integrated picture of the process of cultural evolution that has the explanatory power of the theory of genetic evolution has yet emerged. Much of the effort to examine cultural evolution has focused on interactions of the genetic and cultural processes (e.g., [6], see also references in [7]). This focus, however, provides a sometimes misleading perspective, since most of the behavior of our species that is of interest to policy makers is a product of the portion of cultural evolution [8] that occurs so rapidly that genetic change is irrelevant. There is a long-recognized need both to understand the process of human cultural evolution per se and to find ways of altering its course (an operation in which institutions as diverse as schools, prisons, and governments have long been engaged). In a world threatened by weapons of mass destruction and escalating environmental deterioration, the need to change our behavior to avoid a global collapse [9] has become urgent. A clear understanding of how cultural changes interact with individual actions is central to informing democratically and humanely guided efforts to influence cultural evolution.
While most of the effort to understand that evolution has come from the social sciences, biologists have also struggled with the issue (e.g., p. 285 of [10], [11-16], and p. 62 of [17]). We argue that biologists and social scientists need one another and must collectively direct more of their attention to understanding how social norms develop and change. Therefore, we offer this review of the challenge in order to emphasize its multidisciplinary dimensions and thereby to recruit a broader mixture of scientists into a more integrated effort to develop a theory of change in social norms -- and, eventually, cultural evolution as a whole. What Are the Relevant Units of Culture? Norms (within this paper understood to include conventions or customs) are representative or typical patterns and rules of behavior in a human group [18], often supported by legal or other sanctions. Those sanctions, norms in themselves, have been called "metanorms" when failure to enforce them is punished [17,19,20]. In our (liberal) usage, norms are standard or ideal behaviors "typical" of groups. Whether these indeed represent the average behaviors of individuals in the groups is an open question, and depends on levels of conformity. Conformity or nonconformity with these norms are attributes of individuals, and, of course, heterogeneity in those attributes is important to how norms evolve. Norms and metanorms provide a cultural "stickiness" (p. 10 of [21]) or viscosity that can help sustain adaptive behavior and retard detrimental changes, but that equally can inhibit the introduction and spread of beneficial ones. It is in altering normative attitudes that changes can be implemented.
Here, we review the daunting problem of understanding how norms change, discuss some basic issues, argue that progress will depend on the development of a comprehensive quantitative theory of the initiation and spread of norms (and ultimately all elements of culture), and introduce some preliminary models that examine the spread of norms in space or on social networks. Most models of complex systems are meant to extract signal from noise, suppressing extraneous detail and thereby allowing an examination of the influence of the dominant forces that drive the dynamics of pattern and process. To this end, models necessarily introduce some extreme simplifying assumptions. Early attempts to model cultural evolution have searched for parallels of the population genetic models used to analyze genetic evolution. A popular analogy, both tempting and facile, has been that there are cultural analogues of genes, termed "memes" [22,23], which function as replicable cultural units. Memes can be ideas, behaviors, patterns, units of information, and so on. But the differences between genes and memes make the analogy inappropriate, and "memetics" has not led to real understanding of cultural evolution. Genes are relatively stable, mutating rarely, and those changes that do occur usually result in nonfunctional products. In contrast, memes are extremely mutable, often transforming considerably with each transmission. Among humans, genes can only pass unidirectionally from one generation to the next (vertically), normally through intimate contact. But ideas (or "memes") now regularly pass between individuals distant from each other in space and time, within generations, and even backwards through generations. Through mass media or the Internet, a single individual can influence millions of others within a very short period of time. Individuals have no choice in what genes they incorporate into their store of genetic information, and the storage is permanent.
But we are constantly filtering what will be added to our stored cultural information, and our filters even differentiate according to the way the same idea is presented [24,25]. People often deliberately reduce the store of data (for example, when computer disks are erased, old books and reprints discarded, etc.), or do so involuntarily, as when unreinforced names or telephone numbers are dropped from memory. Such qualitative differences, among others, ensure that simple models of cultural evolution based on the analogy to genetic evolution will fail to capture a great deal of the relevant dynamics. A model framework addressed to the specific challenges of cultural evolution is needed. In the models discussed below, the most basic assumption is that the spread (or not) of norms shares important characteristics with epidemic diseases. In particular, as with diseases, norms spread horizontally and obliquely [14], as well as vertically, through infectious transfer mediated by webs of contact and influence. As with infectious diseases, norms may wax and wane, just as the popularity of norms is subject to sudden transitions [3]. On the other hand, there are unique features of cultural transmission not adequately captured by disease models, in particular the issue of "self-constructed" knowledge, which has long been a source of interest, and the development of problem-solving models in psychology ([26,27]; D. Prentice, personal communication). New syntheses are clearly required. Microscopic Dynamics Substantial progress has been made toward the development of a mathematical theory of cultural transmission, most notably by Cavalli-Sforza and Feldman [14], and Boyd and Richerson [11]. Cavalli-Sforza and Feldman consider the interplay between heritable genetic change and cultural change.
This is an important question, addressed to the longer time scale, with a view to understanding the genetic evolution of characteristics that predispose individuals to act in certain ways in specified situations. For many of the phenomena of interest, however, individual behaviors have not evolved specifically within the limited context of a single kind of challenge, but in response to a much more general class of problems. Efforts to provide genetic evolutionary explanations for human decisions today within the narrow contexts in which they occur may be frustrated because generalized responses to evolutionary forces in the distant past have lost optimality, or even adaptive value. Extant human behaviors, for example, may be the relics of adaptations to conditions in the distant past, when populations were smaller and technology less advanced. Attempts to understand them as adaptive in current contexts may therefore be futile. Thus, we prefer to take the genetic determinants of human behavior (that, for example, we react strongly to visual stimuli) as givens, and to ask rather how those initial conditions shape individual and social learning [3]. Similar efforts have been undertaken by others, such as Henrich and Boyd [28] and Kendal et al. [20]. The sorts of models put forth by Cavalli-Sforza and Feldman, Boyd and Richerson, and others are a beginning towards the examination of a colossal problem. To such approaches, we must add efforts to understand ideation (how an idea for a behavior that becomes a norm gets invented in the first place), and filtering (which ideas are accepted and which are rejected). How many ideas just pop up in someone's brain like a mutation? How many are slowly assembled from diverse data in a single mind? How many are the result of group "brainstorming"? How, for example, did an idea like the existence of an afterlife first get generated? Why do ideas spread, and what facilitates or limits that spread?
What determines which ideas make it through transmission filters? Why are broadly held norms, like religious observance, most often not universal (why, for instance, has atheism always existed [29,30])? Ideas may be simply stated, or argued for, but transmission does not necessarily entail the reception or adoption of behaviors based on the idea, e.g., [31]. What we accept, and what gets stored in long-term memory, is but a tiny sample of a bombardment of candidate ideas, and understanding the nature and origin of filters is obviously one key to understanding the life spans of ideas and associated behaviors once generated. The Emergence of Higher-Level Structure: Some Simple Models Our filters usually are themselves products of cultural evolution, just as degrees of resistance of organisms to epidemics are products of genetic evolution. Filters include the perceived opinions of others, especially those viewed as members of the same self-defined social group, which collectively attempt to limit deviance [32-34]. "Conformist transmission," defined as the tendency to imitate the most frequent behavior in the population, can help stabilize norms [28] and indeed can be the principal mechanism underlying the endogenous emergence of norms. The robustness of norms can arise either from the slow time scales on which group norms shift, or from the inherent resistance of individuals to changing their opinions. In the simplest exploration of this, Durrett and Levin (unpublished data) have examined the dynamics of the "threshold" voter model, in which individuals change their views if the proportion of neighbors with a different opinion exceeds a specified threshold. Where the threshold is low, individuals are continually changing their opinions, and groups cannot form (Figure 1A). In contrast, at high thresholds, stickiness is high -- opinions rarely change -- and the system quickly becomes frozen (Figure 1B). Again, groups cannot form.
In between, however, at intermediate thresholds (pure conformist transmission), groups form and persist (Figure 1C). In the simplest such models in two dimensions, unanimity of opinions will eventually occur, but only over much longer time periods than those of group formation (see also [20]). When the possibility of innovation (mutation) is introduced in a model that considers linkages among traits and group labels, and where individuals can shift groups when their views deviate from group norms sufficiently, multiple opinions and multiple groups can persist, essentially, indefinitely (Figure 1D). Figure 1. (A) Long-term patterning in the dynamics of two opinions for the threshold voter model with a low threshold. (B) Long-term patterning in the dynamics of two opinions for the threshold voter model with a high threshold. Note the existence of small, frozen clusters. (C) Long-term patterning in the dynamics of two opinions for the threshold voter model with an intermediate threshold. Note the clear emergence of group structure. (D) Long-term patterning in a model of social group formation, in which individuals imitate the opinions of others in their (two) groups, and others of similar opinions, and may switch groups when their views deviate from group norms. The formation of groups is the first step in the emergence of normative behavior; the work of Durrett and Levin shows that this can occur endogenously, caused by no more than a combination of ideation and imitation. The existence of a threshold helps to stabilize these groups, and to increase stickiness; furthermore, if threshold variation is permitted within populations, these thresholds can coevolve with group dynamics. What will the consequences be for the size distribution of groups, and for their persistence? Will group stability increase, while average size shrinks? 
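The threshold voter dynamics just described are simple enough to simulate directly. The sketch below is our own illustrative reconstruction, not Durrett and Levin's unpublished code; the grid size, update count, and function names are assumptions made for the example. Binary opinions sit on an n x n torus, and a randomly chosen individual flips only when the fraction of disagreeing nearest neighbors exceeds the threshold.

```python
import random

def neighbor_opinions(grid, n, i, j):
    """Opinions of the four nearest neighbors on an n x n torus."""
    return [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
            grid[i][(j - 1) % n], grid[i][(j + 1) % n]]

def simulate(n=20, threshold=0.5, updates=20000, seed=1):
    """Asynchronous threshold voter model with binary opinions."""
    rng = random.Random(seed)
    grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(updates):
        i, j = rng.randrange(n), rng.randrange(n)
        other = 1 - grid[i][j]
        # Flip only if the share of disagreeing neighbors exceeds the threshold.
        if neighbor_opinions(grid, n, i, j).count(other) / 4 > threshold:
            grid[i][j] = other
    return grid

def local_agreement(grid, n=20):
    """Fraction of adjacent (right/down) pairs holding the same opinion."""
    same = total = 0
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):
                same += grid[i][j] == grid[(i + di) % n][(j + dj) % n]
                total += 1
    return same / total

# An intermediate threshold lets like-minded clusters emerge: agreement
# between neighbors rises well above the ~0.5 expected of a random grid.
grid = simulate(threshold=0.5)
```

Pushing `threshold` toward 0 keeps opinions churning so no stable groups form, while pushing it toward 1 freezes the initial random pattern, mirroring the low- and high-threshold regimes of Figures 1A and 1B.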
What will be the consequences of allowing different individuals to have different thresholds, or of allowing everyone's thresholds to change with the size of the group? When payoffs reward individuals who adhere to group norms, and when individuals have different thresholds, will those thresholds evolve? The answers to such questions could provide deep insights into the mechanisms underlying the robustness of norms, and are ripe for investigation through such simple and transparent mathematical models. Modeling may also shed light on why some norms (like fashions) change so easily, while others (like foot binding in imperial China) persist over centuries, and more generally on how tastes and practices evolve in societies. Norms in art and music change rapidly and with little apparent effort at persuasion or coercion. But three-quarters of a century of communism barely dented the religious beliefs of many Russians, despite draconian attempts to suppress them [35], and several centuries of science have apparently not affected the belief of a large number of Americans in angels and creationism (e.g., [36,37]). Then there are the near-universal norms, such as the rules against most types of physical assault or theft within groups that, although they vary in their specifics, are interpreted as necessary to preserve functional societies. Group-selection explanations for such phenomena (e.g., [12]) are, we argue, neither justified nor necessary (see also pp. 221-225 of [38], [39]). Such behaviors can emerge from individual-based models, simply involving rewards to individuals who belong to groups. There are degrees: the evolution of cooperation is facilitated by tight interactions, for example when individuals interact primarily with their nearest neighbors [40,41], and the payoffs that come to individuals from such cooperation can enhance the tightness of interactions and the formation of groups.
This easily explains why mutually destructive behaviors, like murder, are almost universally proscribed. Group benefits can emerge, and can enhance these effects, but it is neither necessary nor likely that group selection among groups for these behaviors overrides individual selection within groups when these groups are not composed of closely related individuals [42]. Simple models could address such things as the role of contagion in cultural evolution, recognized in one of the first works on psychology [43] in the context of religious revivals and belief, as what has been described as "pious contagion" (p. 10 of [30]). But models must also address issues such as the roles of authority or moral entrepreneurs (individuals engaged in changing a norm) [32], to say nothing of the impacts of advertising and the norm-changing efforts of the entertainment and other industries. In reality, we are intentioned agents who act with purpose. In maturing, we master the norms that have been evolved over a long period, but to which we may adapt in different ways and even (in the case of moral entrepreneurs) strive to change. For a moral entrepreneur, a group that is too small may have little influence and be not worth joining. But large groups may be too difficult to influence, so also may not be worth joining. For such individuals, there is likely an optimal group size, depending on the change the individual wants to effect. Groups also introduce ancillary benefits of membership that change the equation. Such considerations influence decisions such as whether to join a third party effort in a political campaign; understanding the interplay between individual decisions and the dynamics of party sizes is a deeply important and fascinating question, with strong ecological analogies.
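The contagion analogy raised above, and the earlier comparison of norm spread to epidemic disease, can be made concrete with a minimal susceptible-infected-susceptible (SIS) style sketch. This is a hypothetical illustration of our own, not a model from the cited literature; the contact network (a ring), the per-contact adoption probability `beta`, and the per-step abandonment probability `gamma` are all assumed parameters.

```python
import random

def spread_norm(n=200, k=4, beta=0.3, gamma=0.05, steps=100, seed=2):
    """SIS-style norm spread on a ring: each person contacts the k
    nearest neighbors. Adopters abandon the norm with probability gamma
    per step; non-adopters adopt with probability beta per adopting
    contact. Returns the adopter count after each step."""
    rng = random.Random(seed)
    adopted = [False] * n
    adopted[0] = True                      # a single initial innovator
    history = []
    for _ in range(steps):
        nxt = adopted[:]
        for i in range(n):
            if adopted[i]:
                if rng.random() < gamma:   # norm abandoned
                    nxt[i] = False
            else:
                exposed = sum(adopted[(i + d) % n]
                              for d in range(-k // 2, k // 2 + 1) if d != 0)
                if exposed and rng.random() < 1 - (1 - beta) ** exposed:
                    nxt[i] = True          # norm adopted from a contact
        adopted = nxt
        history.append(sum(adopted))
    return history

counts = spread_norm()
```

Because adoption and abandonment are both stochastic, reruns with different seeds show the waxing, waning, and occasional extinction of a norm that the epidemic analogy predicts; raising `gamma` relative to `beta` makes extinction the rule rather than the exception.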
Groups, collectively, must also wrestle with the costs and benefits of increasing membership, thereby enhancing influence while potentially diminishing consensus and hence the perceived benefits to members. Innovation and Conservatism Cultural evolution, like biological evolution, contains what we like to call the "paradox of viscosity." Evolving organisms must balance the need to change at an appropriate rate in response to varying environmental conditions against the need to maintain a functioning phenome. This trade-off between conservatism and adaptability, between stability and exploration, is one of the central problems in evolutionary theory. For example, how much change can there be in the genes required to maintain adaptation in a caterpillar without lethally affecting the structure and functioning of the butterfly (p. 303 of [44])? Conservatism in religion might be explained by the lack of empirical tests of religious ideas. But even in military technology and tactics, where empirical tests are superabundant, changes are slower than might be expected. For example, the British high command in World War I did not react rapidly to the realities of barbed wire, massed artillery, and machine guns [45]. Even so, the conservatism of the generals may be overrated [46]. Macroscopic Dynamics We have thus far examined the evolution of norms in isolation -- as how the views of individuals (and thus the constituents of a pool of nongenetic information) change through time. But everywhere in common discourse and technical literature, it is assumed that norms are bundled into more or less discrete packages we call cultures, and that those packages themselves evolve.
Recall everyday notions such as that American culture of the 1990s was very different from that of the 1960s, that Islamic culture did not undergo the sort of reformation that convulsed Christian culture (for example, [47]), and that Alexander the Great carried Greek culture throughout the Mediterranean and as far east as Persia. The problem of defining "cultures" in cultural evolution seems analogous to that of defining "species" (or other categories) in genetic evolution. There has been a long and largely fruitless argument among taxonomists over the latter [48], and an equally fruitless debate in anthropology (and biology) on the definition of culture [39,49-57]. Again, we suggest that the parsing of the various influences that create and sustain norms and cultures is ripe for theoretical modeling, but it must begin to incorporate the full richness on multiple scales of space, time, and complexity. Durrett and Levin [3] develop a model integrating the dynamics of clusters of linked opinions and group membership; appropriate extensions would allow group characteristics to evolve as well, but on slower time scales. The oversimplicity of models of symmetric imitation on regular grids, as represented in our simple models, must give way to those that incorporate fitnesses and feedbacks, as well as asymmetries and power brokers, on more complex networks of interaction [58]. Challenges and Hypotheses One of the major challenges for those interested in the evolution of norms is, at the most elementary level, defining a norm. This is related to another general problem of defining exactly what is changing in cultural evolution -- which we might call the "meme dilemma" in honor of Dawkins' regrettably infertile notion. A second major challenge is discovering the mechanism(s) by which truly novel ideas and behaviors are generated and spread. A third is discovering the most effective ways of changing norms.
We've got a long way to go before being able to meet those challenges. One place to start is to begin formulating hypotheses about the evolution of norms that can be tested with historical data, modeling, or even (in some cases) experiments. Some hypotheses we believe worth testing (and some of which may well be rejected) are given in Box 1. Box 1. Sample Hypotheses about the Evolution of Norms Hypothesis 1. Evolution of technological norms will generally be more rapid than that of ethical norms. Technological changes are generally tested promptly against environmental conditions -- a round wheel wins against a hexagonal one every time, and the advantages of adopting it are clear to all. Ethical systems, on the other hand, cannot often be tested against one another, and the standards of success are not only generally undetermined, they often vary from observer to observer and are the subject of ongoing controversy among philosophers. Hypothesis 2. In societies with nonreligious art, the evolution of norms in art will be more rapid than those in religion. We hypothesize that art is less important to the average individual than his or her basic system of relating to the world, and conservatism in the latter would be culturally adaptive (leading to success within a culture). Hypothesis 3. Military norms will change more in defeated nations than victorious ones. Were the Maginot Line and the generally disastrous performance of the French army in 1940 examples of a more general rule? Does success generally breed conservatism? Hypothesis 4. The spread of a norm is not independent of the spread of others, but depends on the spread of other norms (norm clusters). Does, for example, empathy decrease with social stratification? Hypothesis 5. Susceptibility to the spread of norms is negatively correlated with level of education. Are the less educated generally more conformist, or does the spread of norms depend almost entirely on the character of the norm? Hypothesis 6.
Horizontal transmission will show less stickiness than vertical transmission. This conjecture is based on anecdotal observations that norms like using hula hoops come and go and are primarily horizontally transmitted, and religious values and other high-viscosity points of view are mostly vertically transmitted (p. 129 of [14], [59]). In this essay we have tried to be provocative rather than exhaustive. There is a welter of issues we have not even attempted to address, including: (1) asymmetries of power in the spread of norms, (2) the role of networks, (3) the efficacy of persuasion as opposed to imitation, (4) the cause of thresholds in the change of norms, (5) the genesis of norms during child development, (6) the connection between attitudes and actions, (7) competition among norms from different cultures; and (8) the question, can norms exist "free of people" in institutions? Institutions certainly may emerge as independent structures, stabilized by laws and customs that are enforced to varying degrees through formal punishment or social pressure. Can such norms persist long even when adherence to them is disappearing? The interplay between the dynamics of individual behaviors and normative rules, operating on different time (and other) scales, may be the key, we argue, to understanding sudden phase transitions that can transform the cultural landscape. We hope that, by being provocative, we can interest more evolutionists, behavioral biologists, and ecologists in tackling the daunting but crucial problems of cultural evolution. Few issues in science would seem to be more pressing if civilization is to survive. Acknowledgments We have received helpful critical comments from Kenneth Arrow, John Bonner, Samuel Bowles, Kai Chan, Gretchen Daily, Partha Dasgupta, Adrian deFroment, Anne Ehrlich, Marcus Feldman, Michelle Girvan, Ann Kinzig, Deborah Prentice, and Will Provine. Amy Bordvik provided invaluable assistance in preparing the manuscript for publication.
From checker at panix.com Thu Jul 14 21:06:07 2005 From: checker at panix.com (Premise Checker) Date: Thu, 14 Jul 2005 17:06:07 -0400 (EDT) Subject: [Paleopsych] NYT: Financially-Set Grandparents Help Keep Families Afloat, Too Message-ID: Financially-Set Grandparents Help Keep Families Afloat, Too http://www.nytimes.com/2005/07/14/national/14grandparents.html By TAMAR LEWIN When he got home from a three-day school camping trip last winter, Schuyler Duffy, a 10th grader at Friends Seminary, told his parents he had had a fantastic time and thanked them for sending him to that Manhattan private school. They reminded him that it was his grandparents who deserved the thanks. "We want Schuyler to appreciate that if my dad weren't paying the tuition, we probably wouldn't have been able to swing it," said Schuyler's mother, Christine Wade. Schuyler's grandparents, who live in Oakland, Calif., cushion their grandson's life in other ways, too, paying for his summer French program in Nova Scotia and helping with the purchase of an apartment when his family was evicted from a rent-stabilized apartment in Tribeca. It has become familiar news that grandparents are rearing millions of American children whose parents are lost to drugs, mental illness or prison. What has been less noticed - and less studied - is that even where the parents are present and functioning, grandparents play important roles in their grandchildren's lives. Some, like Ms. Wade's parents, cover the costs for tuition and real estate down payments. Others pay for summer camps, family vacations and braces. And some, with more young mothers working, care for the grandchildren a day a week or more. For many American families, intergenerational help is now moving in a new direction. "Thirty, 40 years ago, the money went up: you helped your grandparents, you bought them this or that, they might have moved in with you," said Timothy M.
Smeeding, a professor of public policy at the Maxwell School of Syracuse University. "But now, all the money comes down. Most elderly people today are better off than they thought they would be, with the booming stock market of the 1990's, the rising value of homes and the changes in Social Security. Meanwhile, their kids are worse off than they thought they would be. So grandparents help out." Vern Bengtson, a sociologist and gerontologist at the University of Southern California, says the growing involvement of grandparents has been just as dramatic a change in American family life as the unraveling of the nuclear family. While sociologists in recent decades have bemoaned the high divorce rate and the percentage of children born to single mothers, Professor Bengtson said, they have for the most part overlooked the emergence of grandparents as an important resource for family support and stability. "For many Americans, multigenerational bonds are becoming more important than nuclear family ties for well-being and support over the course of their lives," he said. There is, of course, nothing new about grandparents helping to support their children's families; one way or another, grandparents have always pitched in. But as demographic changes have reshaped life paths both for older people and their adult children, the influence of grandparents has expanded. Perhaps because American culture places such emphasis on independence, many people express discomfort about discussing intergenerational help given or received. In dozens of interviews, grandparents said they did not want their names used because they worried that it would embarrass their children or did not want their grandchildren to know what they were paying for. "You'll have to ask my son whether he's comfortable having this in the newspaper," a Manhattan grandmother said. That son said no; like many others in the middle generation, he did not want it known that he was not his family's sole support. 
The near-taboo on the subject, Professor Bengtson said, indicates a cultural lag, with the prevailing norms and attitudes trailing far behind what is actually going on. The very presence of grandparents in their grandchildren's lives is far more common than it used to be. The likelihood that a 20-year-old these days will have a living grandmother (91 percent) is higher than the likelihood that a 20-year-old in 1900 had a living mother (83 percent), according to an analysis by Peter R. Uhlenberg, a professor at the University of North Carolina. And, while 40 years ago, 29 percent of Americans over 65 lived below the poverty line, by 2003 poverty among the elderly had declined by nearly two-thirds. At the same time, this generation of 20- and 30-somethings are taking longer to finish their education and reach self-sufficiency. "Our culture has changed so that education is priced so high, and lasts so long, that this phenomenon of economic dependency lasts much longer than it used to," said Professor Bengtson, himself a grandfather who goes to Santa Barbara each week to spend a day or two with his year-old granddaughter, Zoe Paloma Lozano. Professor Bengtson has measured the growing involvement of grandparents with the college students he teaches. For 20 years, he has been giving his students a questionnaire on how they are financing college; in just the last few years, grandparents' contributions have displaced jobs and borrowing and moved into third place, after parental help and scholarships. With tuition edging toward $30,000 a year, many Manhattan private schools have also noticed an increase in support from grandparents. At Trevor Day School, Donald D. Mordecai, assistant head for finance and operations, said the school was seeing more checks from grandparents in recent years. "I would say anecdotally that over the last three years, as the tuition's gone up, we began to see more grandparents sending in the checks," Mr. Mordecai said. 
"I'd guess that maybe 15 to 20 percent of the kids, especially the younger ones, have tuition paid by their grandparents." For wealthy grandparents, tuition payments can be a good estate-planning device. Under so-called 529 plans, Martin L. Greenberg, an accountant at Rosen, Seymour, Shapss, Martin & Company, said, a grandparent could contribute $55,000 toward a grandchild's college tuition without triggering any gift tax. And private-school tuition paid directly to the school does not count as a gift. "My father pays tuition for the girls, and I'm just grateful beyond belief," said Sunny Bates, who has one daughter at the Dalton School in Manhattan and another joining her there this year. "I think that's incredibly common. You have all these people who grew up in New York when it wasn't so ridiculously expensive, and are now in careers in the arts, or the nonprofits, that don't pay very much, and they couldn't possibly give their children the kind of life they had without some help." In both older generations, a few of those interviewed confessed an unhappy undercurrent to their intergenerational help. A few grandparents admitted feeling that their continuing financial help had spoiled their children and left them with an unseemly sense of entitlement. "I'm hearing a little more from my grandson these days, and I know it's because they're about to ask me to pay his college tuition," said one Manhattan grandmother, wearily. "I'll do it, like I've always done everything for them. But I don't think it's been so good for them." And some adult children can feel infringed on, when the grandmother who comes to care for the baby two days a week criticizes the mother's child-rearing, or the grandparents who make the down payment offer forceful suggestions about decorating the apartment. 
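The $55,000 figure quoted earlier in the article corresponds to the "five-year election" available on 529 plans, under which a lump-sum contribution may be treated as if it were spread over five years of annual gift-tax exclusions (the per-donee exclusion was $11,000 in 2005). A minimal sketch of that arithmetic, for illustration only; the exclusion amount is our assumption about 2005 tax rules and the function name is ours, not from the article, and none of this is tax advice:

```python
# Illustrative sketch of the 529 "five-year election" arithmetic.
# ANNUAL_EXCLUSION is the assumed 2005 per-donee gift-tax exclusion;
# the names and figures here are for illustration, not tax advice.
ANNUAL_EXCLUSION = 11_000
ELECTION_YEARS = 5

def max_lump_sum(annual_exclusion=ANNUAL_EXCLUSION, years=ELECTION_YEARS):
    """Largest one-time 529 contribution per donee that stays within the
    gift-tax exclusion when averaged over the election period."""
    return annual_exclusion * years

print(max_lump_sum())  # 55000, matching the figure quoted in the article
```

Under this averaging, 5 years times $11,000 gives the $55,000 lump sum Mr. Greenberg cites as free of gift tax.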
"I take the money, I'm grateful, but I also feel like it keeps me under their thumb in a way that doesn't feel good," said one woman whose in-laws pay for her 10-year-old's private school tuition, camp, and tutoring. "And I think it makes it harder to say no when they ask us to visit or do things with them. It's somehow like they're calling the shots in our lives." Still, in many families, grandparents are the secret ingredient that makes the difference between a life of struggle and one of relative ease. In a posting last year on the Web site of the Berkeley Parents Network, which does not include names, one mother asked why her own husband's salary of $80,000 did not seem to be enough to pay for the kind of life her neighbors had. "Where are you getting all this money??" she asked. "Interesting question and one I have had myself many, many times," said one of the responses. "The way everyone I know is 'doing it' (nice cars, expensive houses, vacations, 'best' schools, etc.) is with money from their parents and grandparents. Seriously. Those down payments come from grandma and grandpa, and so does private school for the kids. Once I realized that (because friends let it 'slip') it all started to make more sense to me." In New York City, Sid Whelan and Lisa Waller and their daughters, Genevieve, 6, and Gabrielle, 2, get different kinds of help from the two sets of grandparents. Ms. Waller's parents, George and Lula Nunley, moved to New York from Chicago not long after Genevieve was born. At first, the Nunleys had their own apartment, in the same building as the Whelan/Waller family. But there was a constant flow between the apartments, so Mr. Whelan, a real estate agent and musician, suggested that they buy a Harlem townhouse big enough for all of them. Mr. Whelan's parents, who are divorced, helped with the purchase. "My dad gave us a bridge loan, and my mom gave us an interest-only loan," Mr. Whelan said. "They've helped with tuition and music lessons too." Ms.
Waller's parents now live on the top floor of the townhouse. "Obviously in the back of my head, I think I should be able to do it all on my own," Mr. Whelan said, "and at times I do feel guilty for getting so much help. But at the same time, I think it's good that the kids know their grandparents and know what they're doing for them." From checker at panix.com Fri Jul 15 19:32:04 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:32:04 -0400 (EDT) Subject: [Paleopsych] Want to Know: London Bombing Cover-up Message-ID: London Bombing Cover-up http://www.WantToKnow.info/050713londonbombingcoverup [Thanks to Laird for this.] London Bombing Strange Coincidences Suggest Cover-up "At half past nine this morning we were actually running an exercise for a company of over a thousand people in London based on simultaneous bombs going off precisely at the railway stations where it happened this morning, so I still have the hairs on the back of my neck standing up right now." -- Former Scotland Yard Official Peter Power on BBC, 7/7/05 July 13, 2005 Dear friends, Many strange facts from highly credible sources are coming in regarding the recent London bombings. The most astonishing is the following conversation which took place the afternoon of the London bombing on BBC radio. The BBC host interviewed Peter Power, Managing Director of Visor Consultants, which bills itself as a 'crisis management' advice company. Peter Power was a former Scotland Yard official. A Coincidence? POWER: At half past nine this morning we were actually running an exercise for a company of over a thousand people in London based on simultaneous bombs going off precisely at the railway stations where it happened this morning, so I still have the hairs on the back of my neck standing up right now. HOST: To get this quite straight, you were running an exercise to see how you would cope with this, and it happened while you were running the exercise? 
POWER: Precisely, and it was about half past nine this morning. We planned this for a company, and for obvious reasons I don't want to reveal their name but they're listening and they'll know it. And we had a room full of crisis managers for the first time they'd met. And so within five minutes we made a pretty rapid decision that this is the real one, and so we went through the correct drills of activating crisis management procedures to jump from slow time to quick time thinking. This audio clip is still available on the BBC website at: http://www.bbc.co.uk/radio/aod/fivelive_aod.shtml?fivelive/drive_thu To go directly to the above section, in the BBC Radio Player column on the left, click the >>15 min button at the top four times to fast forward to 17:04, where the statement is made. This clip will unfortunately be removed Thursday, July 14th, one week after the original broadcast. Mr. Power repeats these statements on ITN television. The two-minute video clip is available at: http://www.prisonplanet.com/articles/july2005/110705bombingexercises.htm Below is another excerpt on this matter from the website of CBC, Canada's public broadcasting TV network: http://www.cbc.ca/sunday/#night "CRISIS PLANNING: When there is an emergency like the London bombings, the public instinctively turns to professionals for help. We speak to two experts who are in Toronto today for the World Conference on Disaster Management. Adrian Gordon is the Executive Director of the Canadian Centre for Emergency Preparedness, and Peter Power is Managing Director of a London-based consulting firm that specializes in crisis management, Visor Consultants - which on the morning of July 7 was co-incidentally running a security exercise for a private firm, simulating multiple bomb explosions in the London Underground, at the same stations that were subsequently attacked in real life." 
And here is a two-minute video clip on CBC of Power telling his story: http://www.cbc.ca/MRL/clips/rm-lo/charles_disasters050711.rm Why didn't these astounding statements draw front-page coverage both in the UK and US? And why is Peter Power now backtracking on some of his earlier comments? This situation is strikingly similar to a situation on 9/11 which should have made front-page headlines that was reported in the Boston Globe: http://www.boston.com/news/packages/sept11/anniversary/wire_stories/0903_plane_exercise.htm "In what the government describes as a bizarre coincidence, one U.S. intelligence agency was planning an exercise last Sept. 11 in which an errant aircraft would crash into one of its buildings. But the cause wasn't terrorism -- it was to be a simulated accident. Officials at the Chantilly, Va.-based National Reconnaissance Office [NRO] had scheduled an exercise that morning in which a small corporate jet would crash into one of the four towers at the agency's headquarters building after experiencing a mechanical failure. The agency is about four miles from the runways of Washington Dulles International Airport." [So it was just a short hop for the NRO team to run over and secure the Pentagon. And by the way, the [43]NRO has a budget estimated to be [44]three times that of the US State Department] London Bombs of Military Origin Reuters news service reports below on the explosives used in the London bombing: http://today.reuters.com/news/newsArticle.aspx?type=topNews&storyID=2005-07-11T122706Z_01_N11466902_RTRIDST_0_NEWS-SECURITY-BRITAIN-INTELLIGENCE-DC.XML "'The explosives appear to be of military origin, which is very worrying,' said Christophe Chaboud, head of the French Anti-Terrorism Coordination Unit and one of five top officials sent by Paris to London immediately after Thursday's attacks." 
The London Times has an article confirming this: http://www.timesonline.co.uk/printFriendly/0,,1-20749-1690391-20749,00.html "A SINGLE bombmaker using high-grade military explosives is believed to be responsible for building the four devices that killed more than 50 people last week, The Times can reveal. Similar components from the explosive devices have been found at all four murder sites, leading detectives to believe that each of the 10lb rucksack bombs was the work of one man. They also believe that the materials used were not home made but sophisticated military explosives, possibly smuggled into Britain from the Balkans." Phony Al Qaeda Claims? Then MSNBC informs us that the note posted on an alleged Al Qaeda-related website claiming responsibility for the bombings may be phony: http://www.msnbc.msn.com/id/8496293/ "MSNBC TV translator Jacob Keryakes, who said that a copy of the message was later posted on a secular Web site, noted that the claim of responsibility contained an error in one of the Quranic verses it cited. That suggests that the claim may be phony, he said. 'This is not something al-Qaida would do,' he said." Blair Opposes Probe Into Bombings? And the highly respected Financial Times has an article titled: "Blair rejects calls for probe into bombings" http://news.ft.com/cms/s/8186face-f17a-11d9-9c3e-00000e2511c8.html "Tony Blair will on Monday reject Conservative demands for a government inquiry into last week's London bomb attacks, insisting such a move would distract from the task of catching the perpetrators." Does this make any sense? Why wouldn't an investigation help to prevent something like this from happening again? 
Blair's stance is strikingly similar to the early opposition by both President Bush and Vice President Cheney to the establishment of an independent commission to probe 9/11: http://www.cbsnews.com/stories/2002/05/15/attack/main509096.shtml http://www.wanttoknow.info/020204newsweek "President Bush took a few minutes during his trip to Europe Thursday to voice his opposition to establishing a special commission to probe how the government dealt with terror warnings before Sept. 11." "Cheney was calling to pre-emptively protest public hearings by other committees. If the Democrats insisted, Bush administration officials might say they're too busy running the war on terrorism to show up. Press the issue, Cheney implied, and you risk being accused of interfering with the mission." You Can Make a Difference Is it possible that there is more to these bombings than we are being led to believe? We have a wealth of information from highly reliable sources suggesting that, at the very least, 9/11 was consciously allowed to happen by certain elements within the government in order to forward the war agenda. You can find this revealing information with links to all original sources in our 9/11 Information Center at http://www.wanttoknow.info/911information Please help to play the role at which our press is so sadly failing by spreading this important news to your friends and colleagues. By opening our eyes to what is going on and educating ourselves and those around us, we can and will build a critical mass which will force major changes and bring us back to a full democracy of the people, by the people, and for the people. Thanks for caring and you have a good day. With best wishes, Fred Burks for the [52]WantToKnow.info Team P.S.
Important pieces of the above information were obtained from an excellent blog run by a colleague and friend of mine at: [53]http://georgewashington.blogspot.com See our archive of cover-up news articles at [54]http://www.WantToKnow.info/coverupnews Your donations, however large or small, help greatly to support this important work. To make a secure donation: [55]http://order.kagi.com/cgi-bin/store.cgi?storeID=6CYJA Explore these empowering websites coordinated by website founder Fred Burks: [56]http://www.momentoflove.org - Every person in the world has a heart [57]http://www.WantToKnow.info - Revealing major cover-ups & working together for the good of all [58]http://www.gcforall.org - Building a Global Community for All [59]http://www.weboflove.org - Strengthening the Web of Love that interconnects us all Together, we are building a better world based on love and cooperation -- To subscribe to or unsubscribe from the WantToKnow.info email list (avg. one every two or three days), send an email to [60]wecare at wanttoknow.info with "subscribe deep" or "unsubscribe deep" in the subject line. London Bombing Cover-up References 52. http://www.wanttoknow.info/aboutus 53. http://georgewashington.blogspot.com/ 54. http://www.wanttoknow.info/coverupnews 55. http://order.kagi.com/cgi-bin/store.cgi?storeID=6CYJA 56. http://www.momentoflove.org/ 57. http://www.WantToKnow.info/ 58. http://www.gcforall.org/ 59. http://www.weboflove.org/ 60. mailto:wecare at wanttoknow.info
From checker at panix.com Fri Jul 15 19:32:36 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:32:36 -0400 (EDT) Subject: [Paleopsych] NS: Email forwarding amounts to ritual gift exchange Message-ID: Email forwarding amounts to ritual gift exchange http://www.newscientist.com/article.ns?id=dn7662&print=true [Thanks to Laird for this.] * 17:26 12 July 2005 * Will Knight Forwarding a quirky email or an amusing link or video attachment to colleagues may seem innocent enough, but it is the modern equivalent of ritual gift exchange and carries with it similar social implications, say US researchers. Email forwarding is a familiar part of modern email communications, and has spawned many an internet phenomenon: the [12]Star Wars kid, the [13]Numa Numa dance, and [14]Oolong the rabbit, to name just a few. Benjamin Gross at the University of Illinois, US, and colleagues studied email forwarding behaviour by conducting informal interviews among email users. He says forwarding emails plays a vital role in constructing and maintaining modern social ties, despite the phenomenon receiving scant attention from social scientists. Forwarding a genuinely amusing or interesting link to a friend, for example, shows that you are thinking of them and are aware of the sort of content they like, Gross says. But passing an irrelevant or out-of-date link on to contacts can be annoying, thus lowering the sender's social status in the recipient's eyes. Viral marketing "If they are consistently wrong about what content is of actual interest to recipients their reputation may drop in the implicit system people must apply in order to [prioritise] their email," Gross writes in a paper co-authored with Jeff Ubois at the University of California, Berkeley, and Marc Smith at Microsoft Research in Redmond, both in the US.
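The "implicit system" Gross describes, in which senders whose forwards consistently miss the mark get deprioritised by recipients, could be sketched as a simple per-sender hit-rate score. This is entirely our own illustration; the paper quoted above does not specify any such algorithm, and all class and variable names here are hypothetical:

```python
from collections import defaultdict

class ForwardReputation:
    """Track, per sender, the fraction of forwarded items the recipient
    actually found interesting, and rank senders by that fraction.
    Purely illustrative of the 'implicit system' idea, not from the paper."""

    def __init__(self):
        # sender -> [interesting count, total forwards received]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, sender, interesting):
        hits, total = self.stats[sender]
        self.stats[sender] = [hits + (1 if interesting else 0), total + 1]

    def score(self, sender):
        hits, total = self.stats[sender]
        return hits / total if total else 0.5  # unknown senders start neutral

    def ranked(self):
        # Senders whose forwards are usually interesting come first
        return sorted(self.stats, key=self.score, reverse=True)

rep = ForwardReputation()
for interesting in (True, True, False):   # alice: 2 of 3 forwards land well
    rep.record("alice", interesting)
for interesting in (False, False, True):  # bob: only 1 of 3
    rep.record("bob", interesting)
print(rep.ranked())  # alice's forwards rank above bob's
```

A recipient applying something like this, even informally, would read alice's next forward before bob's, which is the reputational mechanism the researchers describe.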
The power of email-mediated social networks has, of course, already been identified by marketing firms, who often try to exploit them through "viral" marketing campaigns. This involves creating a video clip or website that includes an advertising message and hoping that it gets passed on via email to thousands of internet users. Gross says email-forwarding networks could prove useful in other ways. He points to a software project called Forward Track, which can monitor email forwarding chains, making it possible for political groups to keep track of those who have forwarded a political message to friends. Microsoft has also developed software to map the networks created through email forwarding. A prototype program called Social Network and Relationship Finder, or SNARF, can be used to create a picture of the social and business networks constructed through email communications. The researchers will present their paper at the Second Conference on Email and Anti-Spam in California from 21 July. Related Articles * [15]Teamwork will beat the spammers * [16]http://www.newscientist.com/article.ns?id=mg18624996.700 * 12 May 2005 * [17]A new game for Kevin Bacon to play * [18]http://www.newscientist.com/article.ns?id=mg17523512.700 * 13 July 2002 * [19]Small world networks key to memory * [20]http://www.newscientist.com/article.ns?id=dn5012 * 26 May 2004 Weblinks * [21]Second Conference on Email and Anti-Spam * [22]http://www.ceas.cc/ * [23]Forward Track * [24]http://forwardtrack.eyebeamresearch.org/ * [25]Community Technologies, Microsoft Research * [26]http://research.microsoft.com/community/ * [27]Microsoft Research Cambridge * [28]http://research.microsoft.com/aboutmsr/labs/cambridge/ References 15. http://www.newscientist.com/article.ns?id=mg18624996.700 16. http://www.newscientist.com/article.ns?id=mg18624996.700 17. http://www.newscientist.com/article.ns?id=mg17523512.700 18. http://www.newscientist.com/article.ns?id=mg17523512.700 19. 
http://www.newscientist.com/article.ns?id=dn5012 20. http://www.newscientist.com/article.ns?id=dn5012 21. http://www.ceas.cc/ 22. http://www.ceas.cc/ 23. http://forwardtrack.eyebeamresearch.org/ 24. http://forwardtrack.eyebeamresearch.org/ 25. http://research.microsoft.com/community/ 26. http://research.microsoft.com/community/ 27. http://research.microsoft.com/aboutmsr/labs/cambridge/ 28. http://research.microsoft.com/aboutmsr/labs/cambridge/ E-mail me if you have problems getting the referenced articles. From checker at panix.com Fri Jul 15 19:32:42 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:32:42 -0400 (EDT) Subject: [Paleopsych] allAfrica.com: Our group mentality Message-ID: Our group mentality http://allafrica.com/stories/printable/200507130817.html [Thanks to Laird for this.] Daily Trust (Abuja) OPINION July 13, 2005 Posted to the web July 13, 2005 By Professor Layi Erinosho The technologically developed countries are highly individualistic. Emphasis is on the individual, and this is carried almost to a ridiculous extent, at least in the eyes of those in Nigeria. The rights of the individual, whether an adult or infant, are respected in the West. You cannot just scold or spank your child in the United States. Your child could bring a report to his/her school authorities, who would in turn refer the matter to school counsellors, who would in turn recommend psychiatric assistance and/or criminal prosecution. Children enjoy their fundamental human rights and their parents cannot just order them around without seeking their permission. Better still, you may have to give something to your child for working for you in North America. Similarly, children who are eighteen years old always insist on paying something as rent to their parents if they still live at home with them. This is because a child who is eighteen and above is expected to leave home and fend for him or herself. This is unthinkable in our society.
Students enjoy rights, and as such their teachers are very careful in their dealings with them; otherwise they would pay dearly. Teachers nowadays take out insurance coverage in the United States just in case their students seek legal redress over the infringement of their rights. Doctors, lawyers, parishioners, you name it, are all conscious of their rights. It has been suggested that medicine is patient-driven in the United States due to the obsession of patients with their rights and ethical practice. Rights are extended to homes. Partners nowadays negotiate the conditions governing their marriage well ahead and stick to them at all times. The violation of the terms of agreement constitutes a sound legal ground for separation and divorce. Thus, a husband cannot take his wife for granted. He cannot just spank her or insist on making love to her at any time. Her rights must be respected and the decision to mate will be mutual. I had two professors who were husband and wife in my department during my graduate studies in Toronto more than thirty years ago. Each of them had personal telephone landlines then, and you dared not try to reach one of them through the other's telephone line!!! Animals are accorded rights and must be treated humanely in western societies. There are animal rights campaigners, and efforts are made to protect animals and handle them with care. The ethics committees of research institutions enforce the guidelines for handling animals in laboratories. Western societies are therefore obsessed with rights, and this is traceable to their history and culture. Consequently, Africans had better understand democracy and rights in the context of the historical antecedents of western society. Nigeria is not exactly like western societies, because group rights transcend those of the individual. Ours is a group-oriented society. The group is more important than the individual. We must carry our group along all the time.
First, we cannot marry unless we bring the group (i.e., the family) into the picture. The members of the family unit are expected to play a prominent role at every stage of betrothal: introduction, engagement and wedding. Even the wedding ceremony is dominated not by the friends of the groom and bride but by their parents and their long-time friends. It is always as if the parents are the ones getting married and not the young couple. We are our brothers' and sisters' keeper. Therefore our homes are open all the time to the members of our extended family and friends. Anyone can show up at our homes at any time and expect to be warmly received and housed for as many days as possible. It is improper to insinuate that such august visitors are violating your privacy. Husbands enjoy unlimited control over their wives. They can order their wives around to do anything with the covering support of their extended family. Consequently, a wife is least likely to secure the support of her family or the society at large if she is crazy about enforcing her fundamental human rights. Everyone around will simply say to her: respect your husband. She will be treated to long lectures on why the man is the head of the family and is always right. Of course, our children are in a perhaps worse situation. You are always a child of your parents in Nigeria, even if you are seventy years of age!!! This means that your parents can still exercise considerable control over you at any age, as long as they are still alive. They can order you around or insist that you do things in their own way, while you dare not disobey them because this is un-African. A child can live with his/her parents as long as possible, and it will amount to a gratuitous insult for him/her to insist on paying rent to them. These open and welcoming attitudes are indicative of parents' sense of responsibility and love for their children.
It is therefore unthinkable for children to disobey their parents or elders in our societies. Animals do not have rights at all in Nigeria. Pets like dogs or goats or fowls/chickens are treated anyhow by their owners. In fact, motorists deliberately 'gun' for dogs that stray onto the street because it is believed to be a symbolic ritual for warding off accidents. Perhaps the only creatures that are respected are ducks, which are avoided by motorists at all costs. It is believed that overrunning a duck is an invitation to fatal accidents. The fact that we are our brothers' and sisters' keeper is beneficial to everyone. We attempt to look after one another. Ceremonies are never devoid of support from numerous kin and friends. It means spending a lot to host far more people than one can afford at any time. Naming, burial, and wedding ceremonies are always well attended and gifts are showered on the celebrants. It is our duty to support one another all the time, even at the risk of making life difficult for ourselves. But some have found that our commitment to the group can be oppressive. Indeed, it has been argued that Nigerians are prisoners of their group(s). Group dominance therefore undermines the capacity for individual creativity and initiative. Sometimes we are wont to follow the group blindly, to our peril. For example, thousands of Nigerians struggled to deposit their funds in dubious finance houses because their kin and friends did so. All of them lost their hard-earned monies. The fact that we are a group society means that we have to struggle to show appreciation for western values which underscore individual rights. Nigerians cannot really understand this, and it will take some time before we can jettison group rights for individual rights. But my piece of advice to Nigerians is to stay put wherever they are, or move in the opposite direction if their countrymen and women are running eastwards or northwards or westwards.
From checker at panix.com Fri Jul 15 19:32:59 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:32:59 -0400 (EDT) Subject: [Paleopsych] Tim Bayne and Neil Levy: Amputees By Choice Message-ID: Tim Bayne and Neil Levy: Amputees By Choice: Body Integrity Identity Disorder and the Ethics of Amputation Journal of Applied Philosophy, Vol. 22, No. 1, 2005 with an afterword by Wesley J. Smith abstract Should surgeons be permitted to amputate healthy limbs if patients request such operations? We argue that if such patients are experiencing significant distress as a consequence of the rare psychological disorder named Body Integrity Identity Disorder (BIID), such operations might be permissible. We examine rival accounts of the origins of the desire for healthy limb amputations and argue that none are as plausible as the BIID hypothesis. We then turn to the moral arguments against such operations, and argue that on the evidence available, none is compelling. BIID sufferers meet reasonable standards for rationality and autonomy: so as long as no other effective treatment for their disorder is available, surgeons ought to be allowed to accede to their requests. ----------------------------- In 1997, a Scottish surgeon by the name of Robert Smith was approached by a man with an unusual request: he wanted his apparently healthy lower left leg amputated. Although details about the case are sketchy, the would-be amputee appears to have desired the amputation on the grounds that his left foot wasn?t part of him--it felt alien. After consultation with psychiatrists, Smith performed the amputation. Two and a half years later, the patient reported that his life had been transformed for the better by the operation [1]. A second patient was also reported as having been satisfied with his amputation [2]. Smith was scheduled to perform further amputations of healthy limbs when the story broke in the media. 
Predictably, there was a public outcry, and Smith's hospital instructed him to cease performing such operations. At present, no hospital offers healthy limb amputations. Would-be amputees--or "wannabes", as they refer to themselves--would appear to number in the thousands. They have their own websites, and are the subject of a recent documentary [3]. In this paper, we are concerned with two basic questions. First, what would motivate someone to have an apparently healthy limb amputated? Second, under what conditions is it reasonable for doctors to accede to such requests? We believe that the first question can shed significant light on the second, showing that, on the evidence available today, such amputations may be morally permissible.

What is It Like to Be a Wannabe?

What motivates someone to desire the amputation of a healthy limb? One possibility is that wannabes suffer from Body Dysmorphic Disorder (BDD), a condition in which the individual believes, incorrectly, that a part of their body is diseased or exceedingly ugly [4]. This belief can be a matter of intense concern for the individual, and is resistant to evidence against it. BDD appears to be closely akin to anorexia nervosa, in that both appear to be monothematic delusions that are sustained by misperceptions of one's own body [5]. Perhaps wannabes desire amputation in order to rid themselves of a limb that they believe to be diseased or ugly. A second explanation is that wannabes have a sexual attraction to amputees or to being an amputee [6]. On this account, the desire for amputation would stem from apotemnophilia, which is a kind of paraphilia--a psychosexual disorder. Apotemnophiles are sexually attracted to amputees, and sexually excited by the notion that they might become amputees themselves. A third explanation is that there is a mismatch between the wannabe's experience of their body and the actual structure of their body. 
On this third view there is a mismatch between their body and their body as they experience it--what we might call their phenomenal (or subjective) body. On this view, which is increasingly gaining favour, wannabes suffer from Body Integrity Identity Disorder (BIID), also known as Amputee Identity Disorder (AID) [7]. The BIID account can be developed in different ways depending on the type of bodily representation that is thought to be involved. On the one hand, one could conceive of BIID in terms of a mismatch between the patient's body and their body schema. The body schema is a representation of one's body that is used in the automatic regulation of posture and movement [8]. It operates sub-personally and sub-consciously, guiding the parts of one's body to successful performance of action. The body schema is a dynamic structure, providing a moment-by-moment sense of how one's body parts are articulated. Mismatches between a person's body schema and their actual body are not uncommon. Individuals who lose (or have never had) a limb often experience a phantom limb: they feel as though the limb is still there, and in some cases attempt to employ it in order to carry out actions--such as answering the telephone. Whereas the body schema of individuals with phantom limbs includes body parts that they lack, other patients have no body schema for body parts they have. Patients who have undergone deafferentation from the neck down lose any proprioceptive sense of how their limbs are currently positioned, and rely on visual cues to control action [9]. Perhaps wannabes also have a body schema that fails to incorporate the full extent of their bodies. Although we do not want to dismiss this suggestion, the evidence we have to date weighs against this account. As far as we know, wannabes do not exhibit any of the impairments in control of movement that one would expect in a person with a distorted or incomplete body schema. 
Further, wannabes who have had the amputation they desire seem, as far as we can tell, to be content to use a prosthesis. This suggests that the problem they suffer from is not primarily a conflict between their body and their body schema. A more plausible possibility is that BIID involves a mismatch between the wannabe's body and their body image. One's body image is a consciously accessible representation of the general shape and structure of one's body. The body image is derived from a number of sources, including visual experience, proprioceptive experience, and tactile experience. It structures one's bodily sensations (aches, pains, tickles, and so on), and forms the basis of one's beliefs about oneself [10]. Discrepancies between a person's body and their body image occur in a wide range of cases, known as asomatognosias. Asomatognosia can occur as a result of the loss of proprioception, in post-stroke neglect, and in the context of depersonalisation [11]. In many of these cases the patient in question has become delusional and denies either the existence of the affected limb or their ownership of it. In a condition known as somatoparaphrenia, patients will even ascribe ownership of their limbs to another person [12]. Other forms of asomatognosia concern only the patient's perception of their body and leave the doxastic component of their body image intact. Oliver Sacks eloquently describes his own experience of this condition: "In that instant, that very first encounter, I knew not my leg. It was utterly strange, not-mine, unfamiliar. I gazed upon it with absolute non-recognition [...] The more I gazed at that cylinder of chalk, the more alien and incomprehensible it appeared to me. I could no longer feel it as mine, as part of me. It seemed to bear no relation whatever to me. It was absolutely not-me--and yet, impossibly, it was attached to me--and even more impossibly, continuous with me" [13]. 
Sacks did not become delusional--he knew that the leg in question was his--but he no longer experienced it as his own. Perhaps BIID involves a similar form of non-delusional somatic alienation. If so, then there might be a very real sense in which the limb in question--or at least, the neuronal representation of it--is not healthy. It is also tempting to draw parallels between BIID and the discrepancy between body image and the person's actual body that characterizes anorexia nervosa and bulimia nervosa [14]. Of course, there are also important differences between these conditions: whereas the person with anorexia or bulimia fails to (fully) recognize the discrepancy between her body and her body image, the wannabe is all too aware of this discrepancy. None of the three explanations of the desire for amputation that we have outlined attempts to provide complete models of the phenomenon: the BDD model does not attempt to explain why wannabes might regard the limb in question as diseased or ugly; the apotemnophilia model does not attempt to explain why wannabes might be sexually attracted to a conception of themselves as amputees; and the BIID model does not attempt to explain why wannabes might fail to incorporate the limb into their body image. Clearly these models can, at best, provide only a first step in understanding why someone might become a wannabe. Nevertheless, even though these models are incomplete, we can make some progress in evaluating them. A first point to make is that these models may not be exclusive. It could be that there are two or three bases for the desire for amputation, with some patients suffering from BDD, others suffering from a paraphilia, and others suffering from a form of BIID. Some individuals might even suffer from a combination of these disorders. Perhaps, for example, the sexual element is better conceived of as a common, though not inevitable, element of asomatognosia. 
Sexuality is, after all, an essential ingredient in most people's sense of identity. Elliott reports that at least one wannabe (who is also a psychologist) characterizes their desire for amputation as indissolubly a matter of sex and identity [15]. Like Gender Identity Disorder, BIID might be importantly sexual without ceasing to be essentially concerned with identity. However, although each of the three models might play some role in accounting for the desire for healthy limb amputation, we can also ask which model best fits most wannabes. The initial media stories and a subsequent BBC documentary, Complete Obsession, identified Robert Smith's patients as suffering from BDD. However, there seems good reason to doubt whether any of these individuals suffered from BDD, strictly speaking. Neither of the two individuals featured in Complete Obsession appears to find their limbs diseased or ugly. Instead, they feel in some way alienated from them. Further evidence against the BDD hypothesis is provided by recent research by Michael First [16]. First conducted in-depth anonymous interviews with 52 wannabes, nine of whom had either amputated one of their limbs themselves or had enlisted a surgeon to amputate it. Only one of the 52 individuals interviewed cited the ugliness of the limb as a reason for wanting the amputation. What about the suggestion that the desire for amputation stems from apotemnophilia? First's study provides limited grounds for thinking that the desire for amputation might have a sexual basis in some cases. 15% (n = 8) of First's interviewees cited feelings of sexual arousal as their primary reason for desiring amputation, and 52% cited it as their secondary reason. Further, 87% of his subjects reported being sexually attracted to amputees. 
Additional support for the apotemnophilia hypothesis stems from the fact that there is a large overlap between the classes of devotees (acrotomophiles: people sexually attracted to amputees), pretenders (people who consciously fake a disability) and wannabes. More than 50% of devotees are also pretenders and wannabes, suggesting a common cause for all three syndromes [17]. Because of this overlap, the data researchers have gathered on devotees may be relevant to the desire for amputation. Devotees are apparently more sexually attracted to the idea of amputation than to amputees themselves. Though many have had sexual relations with amputees, few go on to establish long-term relationships with particular individuals. As Riddle puts it, for the acrotomophile, "No amputee is the right amputee" [18]. Bruno suggests that this fact is evidence that acrotomophilia essentially involves projection: the wannabe imagines themselves in place of the amputee. Acrotomophilia is apotemnophilia displaced, projected onto others. If apotemnophilia is essentially a body integrity disorder, Bruno seems to think, it could not be displaced so easily. But it seems just as plausible to interpret the acrotomophile's lack of interest in the individual amputee as evidence that it is a concern with his own body that motivates the devotee. In any case, although First's study provides some support for thinking that the desire for amputation can have a sexual component in some instances, it offers little support for the paraphilia hypothesis as the best explanation of the disorder. After all, only 15% of wannabes identified sexual arousal as the primary motivation for amputation: this leaves 85% unaccounted for. First's data provides equivocal support for the third model, on which the desire for amputation derives from the experience of a gulf between one's actual body and one's subjective or lived body. 
The leading primary reason First's subjects gave for wanting an amputation was to restore them to their true identity (63%, n = 33). Participants said such things as, "I feel like an amputee with natural prostheses--they're my legs, but I want to get rid of them--they don't fit my body image", and, "I felt like I was in the wrong body; that I am only complete with both my arm and leg off on the right side." First suggests that this data supports the view that most wannabes suffer from BIID, which he considers akin to Gender Identity Disorder. There is reason for caution here. For one thing, only 37% (n = 19) of First's participants said that the limb in question felt different in some way, and only 13% (n = 7) said that the limb felt like it was not their own. In addition, we know of no evidence that wannabes suffer from the kinds of sensory and attentional impairments--such as neglect--that tend to accompany, and perhaps underlie, standard forms of asomatognosia. Perhaps the notion of body image that First's subjects have in mind is closer to that of the self-image of the person who wants cosmetic surgery, say, for breast enlargement. She knows that she has small breasts, but her idealised image of herself is of someone with large breasts. She does not feel comfortable--at home--in her own body. Although more research needs to be done on the nature and aetiology of the desire for amputation of a healthy limb, the foregoing suffices to put us in a position to make an initial foray into the ethical issues raised by such requests. We turn now to an examination of three arguments in favour of performing the requested amputations.

Harm Minimization

The first and perhaps weakest of the three arguments is familiar from other contexts. 
Whether wannabes are correct in thinking that their disorder requires surgery or not, we must recognize that a significant proportion of them will persist in their desire for amputation, even in the face of repeated refusals, and will go on to take matters into their own hands. The Internet sites run by wannabes often discuss relatively painless and safe ways of amputating limbs, or of damaging them sufficiently to ensure that surgeons have no choice but to amputate. Six of the 52 participants in First's study had amputated a limb themselves, utilizing dangerous means including a shotgun, a chainsaw and a wood chipper. Other patients have turned to incompetent surgeons after competent doctors refused to treat them. In 1998 a seventy-nine-year-old man died of gangrene after paying $10,000 for a black-market amputation [19]. Given that many patients will go ahead with amputations in any case, and risk extensive injury or death in doing so, it might be argued that surgeons should accede to the requests, at least of those patients who they (or a competent authority) judge are likely to take matters into their own hands. At least so long as no other treatments are available, surgery might be the least of all evils. This raises familiar practical and ethical issues to do with participation in a practice of which we might disapprove, and with our inability to confidently distinguish those patients for whom the desire for an amputation might be transient from those who will persist in their demand. Because these issues are familiar and have been extensively treated elsewhere, we will not dwell on them here.

Autonomy

It is a well-entrenched maxim of medical ethics that informed, autonomous desires ought to be given serious weight. An individual's conception of his or her good should be respected in medical decision-making contexts. Where a wannabe has a long-standing and informed request for amputation, it therefore seems permissible for a surgeon to act on this request. 
As an analogy, consider the refusal of life-saving treatment on religious grounds. Although such decisions might result in the death of the patient, they are accorded significant weight in the context of medical decision-making. If we ignore the informed and repeated wishes of the Jehovah's Witness who refuses the blood transfusion needed to save her life, we fail to respect her as an autonomous moral agent who is living her life according to her conception of the good. If it is permissible (or even obligatory) to respect informed and autonomous rejections of life-saving treatment, it is also permissible to act on informed and autonomous requests for the amputation of a healthy limb. Of course, the parallel between the Jehovah's Witness who refuses life-saving treatment and the wannabe who requests the amputation of a limb is not exact: the first case involves an omission but the second case involves an action. This is a difference, but whether or not it is morally relevant depends on what one makes of the act/omission distinction. We are doubtful that the distinction can do much moral work in this context, but to make the case for this position would take us too far away from our present concerns. We shall consider two objections to the argument from autonomy. The first is that wannabes are not fully rational, and that therefore their requests should not be regarded as autonomous. As Arthur Caplan put it: "It's absolute, utter lunacy to go along with a request to maim somebody", because there is a real question whether sufferers "are competent to make a decision when they're running around saying, 'Chop my leg off'" [20]. It is clear that some individuals who might request the amputation of healthy limbs are not rational. Neither the schizophrenic patient who believes that God is telling her to amputate her leg, nor the patient with somatoparaphrenia who attempts to throw his leg out of bed because he thinks it is not his own, is rational. 
To what extent wannabes are also incompetent depends on what kinds of wannabes they are. There is a prima facie case to be made for thinking that wannabes suffering from BDD are not competent to request surgery. There are grounds for regarding BDD as a monothematic delusion, akin to, say, Capgras' delusion (the delusion that a close relative has been replaced by an impostor) or Cotard's delusion (the delusion that one is dead). After all, individuals with BDD appear to satisfy the DSM definition of a delusion: they have beliefs that are firmly sustained despite what almost everyone else believes and despite incontrovertible and obvious proof or evidence to the contrary [21]. Of course, the circumscribed and monothematic nature of this delusion problematizes the charge of incompetence. These patients are not globally irrational. One might argue that despite the fact that their beliefs about the affected limb have been arrived at irrationally, their deliberations concerning what to do in the light of these beliefs are rational, and hence ought to be respected. One might draw a parallel between the position of the person who requests amputation as a result of BDD and the person who refuses life-saving treatment on the grounds of strange religious beliefs. One might argue that in both cases the agent has arrived at their beliefs irrationally, but they may have chosen a reasonable course of action given their beliefs. And--so the argument continues--one might argue that competence is undermined only by unreasonable practical reasoning, not by impaired belief-fixation or theoretical reasoning. There is obviously much more that could be said about whether or not individuals with BDD are competent to request surgery, but we will not pursue these issues, for--as we have already pointed out--First's data suggest that few wannabes are motivated by the belief that their healthy limb is diseased or exceedingly ugly. 
Instead, most wannabes appear to have some form of BIID: they appear to be motivated to achieve a fit between their body and their body image. Are wannabes with BIID delusional? We have already suggested that they are not. Although wannabes seem not to experience parts of their body as their own, they do not go on to form the corresponding belief that those parts are alien. The wannabe with BIID clearly recognizes that the leg is hers: she does not identify it as someone else's leg, nor does she attempt to throw it out of bed, in the way that patients with somatoparaphrenia sometimes do. One might argue that the wannabe's response to her somatic alienation demonstrates a form of irrationality. One might think that the rational response to a conflict between one's subjective experience of embodiment and one's body would be to change one's experience of embodiment rather than change the structure of one's body. The claim is correct but irrelevant: the wannabe's desire for amputation appears to be born out of an inability to change the way in which she experiences her body. Of course, it may be that some wannabes would rather change their actual body to fit their experienced body than vice versa. Is someone with such a set of desires competent to make a request for amputation? They certainly challenge our notions of autonomy and competency, but it is far from obvious that they ought to be regarded as incompetent. It is important to bear in mind that they have spent many years--perhaps even decades--with a non-standard sense of embodiment. (Most wannabes report having had a feeling of somatic alienation since childhood.) Their experience of themselves has been built around this sense, and to require them to change it is, to some extent, to require them to change who they are. The case is not dissimilar to a situation in which an elderly person, blind from an early age, is suddenly presented with the opportunity to regain her sight. 
The decision to decline such an offer can be understood as an exercise of rational agency. A useful angle on the question of whether the requests of wannabes could be competent is provided by contrasting wannabes with people who desire cosmetic surgery (where the surgery is not for the treatment of disfigurement). While one can certainly argue on feminist grounds that such people are not fully competent, these arguments have left many people unmoved [22]. We allow individuals to mould their body to an idealized body type, even when we recognize that this body image has been formed under the pressure of non-rational considerations, such as advertising, gender norms, and the like. If this holds for the individual seeking cosmetic surgery, what reason is there to resist a parallel line of argument for those seeking amputation? Of course, the latter individual is seeking to mould their body to an ideal that few of us aspire to, and one that has been formed under conditions that are far from perfect, but why should these facts cut any moral ice? In fact, one might think that the desire for cosmetic surgery (and gender-reassignment surgery) is more problematic than the desire for amputation. Men who believe that they are really women "trapped in a man's body"--and the overwhelming majority of transsexuals are male-to-female--typically reinforce a stereotyped view of femininity, and contribute, however unwittingly and obliquely, to gender inequality [23]. The essential woman they seek to be is weak and helpless, obsessed by appearance, and so on [24]. There are related feminist grounds (and not only feminist grounds) on which to criticize cosmetic surgery: it reinforces a very unfortunate emphasis on appearance over substance. It is hard to see that the desire for amputation could be criticized on grounds of these kinds, since it goes against the grain of our culturally endorsed ideals of the body. 
A second objection to the argument from autonomy is that the wannabe is not in a position to give informed consent to the surgery, for he or she does not--and cannot--know what it is like to be an amputee without first becoming an amputee. We think that this objection is weak. First, it is not at all obvious that the wannabe cannot know what it will be like to be an amputee without becoming an amputee. Arguably, there is a sense in which the wannabe already knows what it is like to be an amputee. We might also note that at least some wannabes pretend to be amputees--they spend their weekends in a wheelchair, and so on. To some degree, it seems that a wannabe can know what it is like to be an amputee. But a more important point to be made here is that the objection appears to set the bar for autonomy too high [25]. Autonomy demands only that one have an adequate understanding of the likely consequences of an action, and one can have a reasonable understanding of what life as an amputee would be like without first becoming an amputee. Arguably, the wannabe is in a better position to appreciate the consequences of the desired surgery than is the person who seeks cosmetic surgery, the would-be surrogate mother, or the person desiring gender reassignment surgery.

Therapy

A third argument in favour of operating appeals to the therapeutic effects promised by such operations. The argument rests on four premises: (i) wannabes endure serious suffering as a result of their condition; (ii) amputation will--or is likely to--secure relief from this suffering; (iii) this relief cannot be secured by less drastic means; (iv) securing relief from this suffering is worth the cost of amputation. This argument parallels the justification for conventional amputations. There is some reason to endorse (i). First, the lengths to which wannabes go in an effort to amputate their own limbs suggest that their desires are strong and unrelenting. 
Even when wannabes do not take active steps to secure an amputation, their feeling of bodily alienation seems to cause severe disruption to their everyday lives. 44% of First's subjects reported that their desire interfered with social functioning, occupational functioning, or leisure activities. Some writers suggest that (ii) is problematic. Bruno and Riddle claim that the desire for amputation has its origins in attention-seeking sparked by the deprivation of parental love [26]. On this hypothesis, though it is possible that satisfying their wish for an amputation might give wannabes the attention and kindness they seek, it is unlikely. Though amputees are treated with a certain degree of solicitude in many situations, the daily frustrations and difficulties caused by their condition almost certainly more than outweigh this care. Moreover, it is quite likely that the wannabe will not be satisfied with the solicitude of strangers. Instead she will seek ongoing commitment from particular individuals, and there is little reason to think that she is more likely to get this than are non-amputees. Finally, it might be that even the love of particular others will not suffice: it may be that literally nothing can stand in for the love of which she was deprived as a child. Bruno suggests that psychotherapy is the appropriate response to the disorder, not surgery: the patient needs to develop insight into the real source of her problems before she can solve them. Bruno's proposal is empirically testable: we can evaluate whether the desire for amputation responds to psychotherapy, and whether amputation simply leads to the displacement of the patient's symptoms. What little data we have to date suggests that Bruno is wrong on both counts. We know of no systematic study of the effects of psychotherapy on the desire for amputation, but First's study suggests that it is not particularly effective. 
Of the 52 individuals he interviewed, 18 had told their psychotherapist about their desire for amputation, and none reported a reduction in the intensity of the desire following psychotherapy. On the other hand, on the scant evidence available, wannabes who succeed in procuring an amputation seem to experience a significant and lasting increase in wellbeing. Both of Robert Smith's patients were reported as having been very happy with their operations, and the nine subjects in First's study who had had an amputation also expressed satisfaction with the results [27]. As far as we can tell, such individuals do not develop the desire for additional amputations (in contrast to individuals who have had cosmetic surgery). Nor, as far as we know, do such patients develop (unwanted) phantom limbs. Of course, it may be that the sample to which researchers have had access is self-selecting: adherents of the BIID account are motivated to come forward to adduce evidence in favour of their theory, while those who have had more unhappy experiences simply lose interest in the debate, or are too depressed to motivate themselves to take any further part. In any case, the sample sizes are too small to be statistically significant. Unfortunately, it is hard to see how it will be possible to collect sufficient data of the required sort. We can of course follow the fortunes of those who have arranged non-medical amputations for themselves, but a controlled study would presumably require medical amputations, and ethical approval for performing such operations is unlikely to be forthcoming without this very data [28]. We turn now to (iii): can the wannabe secure relief from their suffering by less drastic means than amputation? Again, the jury is out on this. First's study suggests that psychotherapy is not a particularly effective form of treatment, but psychotherapy is not the only alternative to amputation. 
Some form of cognitive behavioural therapy might prove effective, perhaps in combination with psychotropic drugs. But it might also be that some wannabes cannot be helped by available drugs or talking therapy, whatever the aetiology of the disorder. After all, the phantom limb phenomenon is resistant to these forms of treatment. For at least some patients, there may be no treatment available other than amputation. Finally, we turn to (iv): is securing relief from this suffering worth the cost of amputation? This, of course, will depend on the degree of suffering in question and the costs of amputation. We have already noted that there is reason to think that wannabes often experience significant misery from their condition. But what should we say about the costs of amputation? These, of course, will vary from case to case, depending on the financial and social circumstances of the individual, and the nature of the amputation itself. The costs might be offset by the benefits of amputation in some cases but not in others. It is interesting to note that of the two would-be amputees featured in the Complete Obsession documentary, the person seeking amputation of a single leg was given psychiatric approval, while the person seeking to have both her legs amputated was denied psychiatric approval. And of course the costs are not always borne just by the patient; they are often also borne by the patient's family and by society as a whole. There is ample room here for false consciousness. On the one hand, one can argue that wannabes have an overly rosy image of what life as an amputee involves. And certainly those wannabes who have become amputees have a motivation for thinking that their life is better than it really is. On the other hand, one could also argue that those of us who are able-bodied have an overly pessimistic image of the lives of the disabled. 
As able-bodied individuals, we might be tempted to dwell on the harm that accompanies amputation and minimize what is gained by way of identification. Perhaps we are tempted to think that the effects of the surgery are worse than they are.

Repugnance

We believe that the arguments canvassed above establish a prima facie case for thinking that wannabes should have access to amputation, at least in those instances in which they suffer from BIID. However, we recognize that many people will continue to find the idea of voluntary amputation of a healthy limb objectionable, even when they acknowledge the force of these arguments. What motivates such reactions? We suspect that much of this hostility derives from the sense of repugnance that is evoked by the idea that a person might wish to rid themselves of an apparently healthy limb. Dennis Canavan, the Scottish member of parliament who campaigned to prevent Robert Smith from carrying out such operations, was quoted as saying: "The whole thing is repugnant and legislation needs to be brought in now to outlaw this" [29]. Mr Canavan is surely not alone in having such a reaction. Wannabes evoke an affective response not dissimilar to that evoked by the prospect of kidney sales, bestiality, or various forms of genetic engineering. Even when a limb is severely diseased and must be removed in order to save the patient's life, the thought of amputation strikes many as distasteful at best. Although they should not be dismissed, we think that such responses should be treated with a great deal of caution. A large number of morally benign practices--such as masturbation, inter-racial marriage, burial (and cremation) of the dead, organ selling, artificial insemination, tattooing and body piercing--have the ability to elicit disgust responses. Disgust responses can alert us to the possibility that the practices in question might be morally problematic, but they do not seem to be reliable indicators of moral transgression [30]. 
Indirect Effects

We have explored three arguments for allowing self-demand amputation of healthy limbs: the argument from harm minimization, the autonomy argument and the therapeutic argument. We have suggested that these arguments have some force. But even if we are right about that, it does not follow that we ought to allow self-demand amputation of healthy limbs. One might hold that although these arguments are strong, their force is outweighed by reasons for not allowing such surgery. In our view, the strongest such argument concerns the possible effects of legitimising BIID as a disorder. The worry is that giving official sanction to a diagnosis of BIID makes it available as a possible identity for people. To use Ian Hacking's term, psychiatric categories have a "looping" effect: once in play, people use them to construct their identities, and this in turn reinforces their reality as medical conditions [31]. Arguably, something like this has occurred in the case of Dissociative Identity Disorder (formerly multiple personality disorder): the explosion of diagnoses of DID might be due in part to the fact that people regard DID as a culturally sanctioned disorder. The very awareness of a disorder can contribute to its proliferation. Could a similar effect occur for BIID? Is it likely that the inclusion of the disorder in the forthcoming DSM-V will generate an explosion of cases on the order of that seen in the study of dissociation? Perhaps, but there is reason to think that such fears are unwarranted. The desire for amputation of a healthy limb is at odds with current conceptions of the ideal body image. The preference for bodily integrity is deep-seated in normal human beings, and advertising does much to reinforce such norms. We therefore think it unlikely that the desire for amputation will proliferate.
Conclusion

In a world in which many are born without limbs, or lose their limbs to poisons, landmines, and other acts of man and God, it might seem obscene to legitimise the desire for the amputation of healthy limbs. But we have argued that, in the case of at least some wannabes, the limb in question is not as healthy as it might appear: in an important sense, a limb that is not experienced as one's own is not in fact one's own. Disorders of depersonalisation are invisible to the outside world: they are not observable from the third-person perspective in the way that most other disorders are. But the fact that they are inaccessible should not lead us to dismiss the suffering they might cause. Whether amputation is an appropriate response to this suffering is a difficult question, but we believe that in some cases it might be [32].

Tim Bayne
Department of Philosophy
Macquarie University
Sydney, NSW 2109
Australia
tbayne at scmp.mq.edu.au

Neil Levy
Centre for Applied Philosophy and Public Ethics
Department of Philosophy
University of Melbourne
Parkville Vic 3010
Australia
nllevy at unimelb.edu.au

NOTES

[1] K. Scott (2000) Voluntary amputee ran disability site, The Guardian, February 7.
[2] G. Furth and R. Smith (2002) Amputee Identity Disorder: Information, Questions, Answers, and Recommendations about Self-Demand Amputation (Bloomington, IN, 1st Books).
[3] M. Gilbert (2003) Whole U.S.A.
[4] K. Phillips (1996) The Broken Mirror: Understanding and Treating Body Dysmorphic Disorder (Oxford, Oxford University Press).
[5] D. M. Garner (2002) Body image and anorexia nervosa in T. F. Cash & T. Pruzinsky (eds) Body Image: A Handbook of Theory, Research, and Clinical Practice (New York, The Guilford Press).
[6] J. Money, R. Jobaris, and G. Furth (1977) Apotemnophilia: Two cases of self-demand amputation as paraphilia, The Journal of Sex Research, 13, 2, 115-125.
[7] Furth & Smith op. cit.
[8] The term 'body schema' is used in different ways by different authors.
We are following Shaun Gallagher's usage. See S. Gallagher (1995) Body schema and intentionality in J. Bermúdez, N. Eilan and J. Marcel (eds) The Body and the Self (Cambridge, MA, M.I.T. Press) pp. 225-44 and S. Gallagher (2001) Dimensions of embodiment: Body image and body schema in medical contexts in S. K. Toombs (ed) Handbook of Phenomenology and Medicine (Dordrecht, Kluwer Academic Publishers) pp. 147-75.
[9] S. Gallagher and J. Cole (1995) Body schema and body image in a deafferented subject, Journal of Mind and Behavior, 16, 369-90.
[10] The term 'body image' is also used in different ways by different authors. Again, we follow Shaun Gallagher's usage of the term. See reference [8].
[11] T. E. Feinberg, L. D. Haber, and N. E. Leeds (1990) Verbal asomatognosia, Neurology, 40, 1391-4; J. A. M. Frederiks (1985) Disorders of the body schema, in Clinical Neuropsychology, J. A. M. Frederiks (ed) Handbook of Clinical Neurology, rev. series, No. 1 (Amsterdam, Elsevier); M. Sierra and G. E. Berrios (2001) The phenomenological stability of depersonalization: Comparing the old with the new, The Journal of Nervous and Mental Disorders, 189, 629-636.
[12] An account of such a case is described in O. Sacks (1985) The man who fell out of bed, in The Man who Mistook his Wife for a Hat (New York, Touchstone).
[13] O. Sacks (1991) A Leg to Stand On (London, Picador).
[14] R. M. Gardner and C. Moncrieff (1988) Body image distortion in anorexics as a non-sensory phenomenon: A signal detection approach, Journal of Clinical Psychology, 44, 101-107 and T. F. Cash and T. A. Brown (1987) Body image in anorexia nervosa and bulimia nervosa: A review of the literature, Behavior Modification, 11, 487-521.
[15] C. Elliott (2003) Better Than Well: American Medicine Meets the American Dream (New York, W.W. Norton & Company).
[16] M. B. First (unpublished) Desire for amputation of a limb: Paraphilia, psychosis, or a new type of identity disorder? Submitted.
[17] R.
Bruno (1997) Devotees, pretenders and wannabes: Two cases of factitious disability disorder, Journal of Sexuality and Disability, 15, 243-260.
[18] G. C. Riddle (1988) Amputees and devotees: Made for each other? (New York, Irvington Publishers).
[19] C. E. Elliott (2000) A new way to be mad, The Atlantic Monthly, 286, 6, December.
[20] Quoted in R. Dotinga (2000) Out on a limb, Salon, August 29, 1.
[21] American Psychiatric Association (2000) Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (Washington, D.C., American Psychiatric Association).
[22] For a feminist argument against the permissibility of cosmetic surgery see K. P. Morgan (1991) Women and the knife: cosmetic surgery and the colonization of women's bodies, Hypatia, 6, 3, 25-53.
[23] H. Bower (2001) The gender identity disorder in the DSM-IV classification: a critical evaluation, Australian and New Zealand Journal of Psychiatry, 35, 1-8.
[24] M. Garber (1993) Vested Interests: Cross-Dressing & Cultural Anxiety (London, Penguin).
[25] See J. Oakley (1992) Altruistic surrogacy and informed consent, Bioethics, 6, 4, 269-287.
[26] Bruno op. cit. and Riddle op. cit.
[27] See also Elliott (2000) and (2003) op. cit. and F. Horn (2003) A life for a limb: body integrity identity disorder, Social Work Today, Feb 24.
[28] R. Smith and K. Fisher (2003) Healthy limb amputation: ethical and legal aspects (letter), Clinical Medicine, 3, 2, March/April, 188.
[29] Quoted in Dotinga op. cit.
[30] See J. R. Richards (1996) Nefarious goings on, The Journal of Medicine and Philosophy, 21, 375-416.
[31] I. Hacking (1995) Rewriting the Soul: Multiple Personality and the Sciences of Memory (Princeton, Princeton University Press).
[32] We are very grateful to Shaun Gallagher, Jonathan Cole, Michael First and an anonymous reviewer for their very useful comments on a previous version of this paper. We also thank Suzy Bliss for her valuable help.

Wesley J.
Smith: Should Doctors Be Allowed To Amputate Healthy Limbs? The Weekly Newsletter of the Center for Bioethics and Culture Network http://www.cbc-network.org/enewsletter/index_7_13_05.htm#article1

If you want to see why Western culture is going badly off the rails, just read the drivel that passes for learned discourse in many of our professional journals. The most recent example is "Amputees by Choice: Body Integrity Identity Disorder and the Ethics of Amputation," published in the current issue of the Journal of Applied Philosophy (Vol. 22, No. 1, 2005). The question posed by the authors, Tim Bayne and Neil Levy, both Australian philosophy professors, is whether physicians should be permitted to amputate a patient's healthy limb because the patient is obsessed with becoming an amputee, an apparently newly discovered mental disorder that has been given the name Body Integrity Identity Disorder (BIID). For people of common sense, the answer is obvious: NO! First, who but a severely mentally disturbed person would want a healthy leg, arm, hand, or foot cut off? Such people need treatment, not amputation. Second, physicians are duty bound to do no harm, that is, they should refuse to provide harmful medical services to patients -- no matter how earnestly requested. (Thus, if I were convinced that my appendix was actually a cancerous tumor, that would not justify my doctor acquiescing to my request for an appendectomy.) Finally, once the limb is gone, it is gone for good. Acceding to a request to be mutilated would amount to abandoning the patient. But according to Bayne and Levy, and a minority of other voices in bioethics and medicine, the need to respect personal autonomy is so near-absolute that it should even permit doctors to cut off the healthy limbs of amputee wannabes.
After all, the authors write, we allow individuals to mould their body to an idealized body type in plastic surgery -- a desire that is more problematic than the desire for amputation, since cosmetic surgery reinforces a very unfortunate emphasis on appearance over substance. (Emphasis within the text.) Moreover, the authors claim in full postmodernist mode, just because a limb is biologically healthy does not mean that the leg is real. Indeed, they argue, a limb that is not experienced as one's own is not in fact one's own. That this kind of article is published in a respectable philosophical journal tells us how very radical and pathologically nonjudgmental the bioethics movement is becoming. And lest you believe that such advocacy could never reach the clinical setting: Think again. Such surgeries have already been performed in the United Kingdom with no adverse professional consequence to the amputating physicians. Even more worrying, the current trends in American jurisprudence could one day legalize amputation as treatment for BIID. For example, in 1999, the Montana Supreme Court invalidated a law that required abortions to be performed in hospitals. But rather than limit the decision to that issue, the 6-2 majority opinion in James H. Armstrong, M.D. v. The State of Montana imposed a radical and audacious medical ethic on the people of Montana, ruling: "The Montana Constitution broadly guarantees each individual the right to make medical judgments affecting her or his bodily integrity and health in partnership with a chosen health care provider free from government interference." If indeed almost anything goes medically in Montana -- so long as a patient wants it and a health care professional is willing to provide it -- then it would seem that a physician could legally amputate a patient's healthy limbs upon request to satisfy a neurotic BIID obsession.

Award-winning author Wesley J.
Smith is a senior fellow at the Discovery Institute and a special consultant for the Center for Bioethics and Culture Network.

Stem Cell News: Illinois to fund embryonic stem cell research

Illinois to pay $10 million towards stem cell research: Although the monies are relatively insignificant, Illinois has now joined 3 other states in funding embryonic stem cell research, but in this case, without the consent of the state legislature. Governor Rod Blagojevich, by executive order, along with Comptroller Dan Hynes, included an amorphous line item in the state budget for the Illinois Department of Public Health called "for scientific research" which makes no clear statement about stem cells. Patty Schuh, spokeswoman for Senate Minority Leader Frank Watson, said "What they did was they snuck $10 million into a budget without being up-front with the public." Robert Gilligan of the Catholic Conference of Illinois also said, "I think it's shameful. I think it's a disgrace that, on July 12, when the Legislature is not in session, he finds $10 million dollars to partially fund something that's morally objectionable to many people." Some suggest it is mostly a political gesture considering it is only a fraction of the proposed monies being spent by the other 3 states, California, New Jersey, and Connecticut.

CBC's Vision and Mission: CBC's vision is to equip people to face the challenges of the 21st century, defend the dignity of humankind, and embrace ethical biotechnology for the human good. Our mission is to defend human dignity as our societies face issues regarding the taking, making, and faking of human life. How do we accomplish this? We offer opportunities via events like the recent [14]Prop71 roundtable and the [15]Techno Sapiens process to engage people's hearts and minds to think seriously and soberly about the issues that will come to dominate the 21st Century.
From this newsletter you can see that even now a crucial debate is taking place over the fate and status of the embryo that has far-reaching consequences. The US has an opportunity within its grasp to make a clear statement about ethical research, and CBC is part of informing that statement and giving clear guidance within the debate. And how can CBC put on these events and help inform the debates? Simply, it is through you and your support that we can carry out our mission. [16]download the CBC brochure

References
14. http://www.cbc-network.org/redesigned/event_signup.php
15. http://www.cbc-network.org/redesigned/event_display.php?id=118
16. http://www.cbc-network.org/pdfs/cbcbrochure.pdf

From checker at panix.com Fri Jul 15 19:33:22 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:33:22 -0400 (EDT) Subject: [Paleopsych] Gary North: Inside Job: How Nixon Was Taken Down Message-ID: Inside Job: How Nixon Was Taken Down Gary North's REALITY CHECK Issue 453, 5.7.5 [Part 2 appended.]

INTRODUCTION

People can be misled by deliberately distracting them. This fact is basic to all forms of "magic," meaning prestidigitation. The performer seeks to persuade members of the audience to focus their attention on something peripheral, when the real action lies elsewhere. A skilled performer can do this "as if by magic." The mainstream media use a very similar strategy in crucial events. The public is reminded of the official version of what is a turning-point event. Nothing is said of other aspects of the event. It is assumed that the public will forget. Prior to the Internet, this was a safe assumption. It no longer is. Let me give you an example. We have all heard the analogy of the elephant in the living room. Out of politeness to the host, no guest says anything. If no one says anything for a long enough period, people tend not to notice the beast any more. It becomes background noise (and odor).
When they move on to another dinner party, they tend to forget that the elephant was ever there. I am now going to present three videos. They are videos of the largest documented elephant in the largest living room in human history. Nothing else matches it. Some of you may have seen video #3: it was on national TV. Yet after its broadcast, this elephant was dropped down the memory hole. Memory holes are designed to accept elephants. People have heard about this one, but they have long forgotten it. Only by deliberately ignoring it can those in charge of reminding people to think about certain details be confident that the official version of the event will be believed. The event took place eight hours after a more famous event. That event was 9/11. But this event was also part of 9/11. It is the event, more than any other event, that does not fit the official explanation. It fits a few of the unofficial explanations. Watch the video. On my computer, QuickTime is automatically activated. http://shurl.org/elephantvideo "Name that event!" What event was it? Do you remember? Can you identify it by name? Probably not. Don't tell me Orwell was wrong in "Nineteen Eighty-Four." The memory hole strategy works. Now watch a different video of the same event. Think about what you are watching. How can this be true? How can the sequence have taken place? What's wrong with this picture? http://shurl.org/elephantvideo2 Then, for the capper, watch the third version. Listen to the verbal comments. The commentator is Dan Rather. Millions of people saw this video and heard what he said. He never said it again. Neither did any other national media commentator. Watch and listen. There is a movable right-left button at the bottom of the image's screen if you have QuickTime installed. You can rerun the video by using your cursor to move the button to the left. Then click the play button again: >.
http://shurl.org/elephantvideo3 All right, for those of you who are still confused at what this is, I'll refresh your memory. This is the collapse of Building 7 of the World Trade Center complex. It took place at 5:20 p.m., over eight hours after the previous collapses. This is the third largest building in history to collapse, yet it is essentially forgotten by the public. Building 7 was a block away from the first two. http://www.wtc7.net/location.html As you have seen, the building collapsed from the bottom. It fell straight down -- just as the North and South towers did. This caught Rather's attention. In a moment of extreme indiscretion, he said this: It's reminiscent of those pictures we've all seen too much on television before, when a building was deliberately destroyed by well-placed dynamite to knock it down. He only said it once. But, because of the Web, we can hear him say it over and over. Why did it collapse? Because it was deliberately demolished. Despite all the chaos of that day, there was time to "pull it." Evidence? http://www.whatreallyhappened.com/cutter.html Recording: http://www.whatreallyhappened.com/PULLIT.mp3 In the midst of that day's chaos, a team of highly skilled professionals was assembled in less than eight hours to demolish the building. How? This time frame in itself is remarkable: hours, not days. How? There were fires on the upper floors. The team clearly had the protection of the authorities. They blew out the foundations of an insured building. By common law, a fire department can legally destroy property to keep a fire from spreading. I ask: "To where?" For more details and an amazing video, click here: http://shurl.org/pullit [You will probably not be able to turn off the first video if you watch the first five minutes. If you're at work, you are now forewarned.
The report is from Alex Jones, or as he is known by cognoscenti on the Left, "Rush Limbaugh's Rush Limbaugh."] This is not the sort of thing that the American public wants to hear. So, they do not hear it in the mainstream media. Enough of this information was made public by mainstream media sources so that it's not easy to call this a systematic cover-up. But because people have very short memories, it is not necessary to engineer a complete blackout to cover up something very big -- in this case, 47 stories. It is only necessary to ignore certain evidence and refrain from asking certain questions. Like the dog that Sherlock Holmes observed in retrospect -- the dog that did not bark -- so is the question that never gets asked by investigators. Why doesn't it get asked? I am making a simple point. An event seen by millions of people so recently can still be dropped down the memory hole. Embarrassing questions are not aired before the general public. No one asks: "What is that elephant doing here?" It is time to consider a far less visible elephant.

Part 1
DEEP THROAT: THE SIDESHOW SOLVED!

The identity of Deep Throat is modern journalism's greatest unsolved mystery. It has been said that he may be the most famous anonymous person in U.S. history. This is the assessment of John O'Connor, author of the July 2005 "Vanity Fair" article, "I'm the Guy They Called Deep Throat." If this really was modern journalism's greatest unsolved mystery, then modern journalists have got way too much time on their hands. Deep Throat. For days after "Vanity Fair"'s story appeared (May 31), the media were filled with Deep Throat stories. "Washington's oldest mystery is solved!" This shows that Washington is still as dumb as a post, and has a newspaper to prove it: "The Washington Post." Deep Throat was a sideshow in 1973, and still is. Deep Throat never had what it took to unseat Richard Nixon. Neither did Woodward and Bernstein. One man did. He remains anonymous.
In the initial contacts with Woodward, Deep Throat merely confirmed what W&B had dug up on their own. He was not a supplier of new information until much later. The real supplier of new information never talked with Woodward or Bernstein. They never knew he was the reason why all the President's men sank with the Good Ship R. M. Nixon. He was buried so deeply in the bowels of the government that I call him Deep Sphincter.

"FOLLOW THE MONEY"

W. Mark Felt was on target when he told Woodward to follow the money. He did historians a great favor by getting this phrase into the English language -- not that most salaried historians are willing to do this. But anyone who is trying to uncover the source of crucial decisions ought to begin with the trail of digits in our era that we call money. Nevertheless, this is only one avenue from the here and now back to square one. The other major trail is the loyalty trail. This procedure is what I have called "follow the oath." When we discover to whom or to what a man has sworn allegiance, we learn a great deal about him. We must also look carefully at the sanctions, both positive and negative, that are imposed to maintain his allegiance. When men keep their mouths shut about a really big secret, there has to be fear in the picture. Men love to brag about the big deals they have been a part of. Eventually, they feel compelled to take credit. W. Mark Felt held back for over three decades, but finally he went public. "Yes, I did it. I'm the one!" It is the cry of the four-year-old on the day care playground: "Look at me!" Call it a Felt need. The man who takes his biggest secret to the grave was a serious player, or at least a serious observer. He who exposes a damaging secret is hailed by the enemies of his victim and is vilified by the victim's supporters. Mr. Felt is now experiencing both traditional responses, which come with the territory. His critics cry: "Disloyalty!" Nixon's enemies cry: "Higher duty!"
Different strokes from different folks. But the person who actually made the difference -- the one who brought Nixon down -- says nothing. The press says nothing. The greatest Watergate secret of all remains a secret.

TRUNCATED CHAINS OF COMMAND

Woodward and Bernstein kept writing stories about the Committee to Re-Elect the President. Nixon's team was not very forward-looking when they chose this name for their organization. Its acronym later became CREEP. (The other possible acronym, CRP, also created PR problems.) I challenge readers to come up with a real-world organization with a negative acronym to match CREEP. CREEP crept on behalf of a man universally regarded by his enemies as a creep. CREEP was perfect for the newspapers. Nevertheless, tracing money into CREEP and back out to one of the burglars was not the same as tracing anything illegal to Nixon. Nixon could always say that he had nothing to do with the minions at CREEP. This is what every senior decision-maker says whenever some unsavory machination hits the headlines. It works most of the time. The minions are either loyal or afraid. When threatened with serious negative sanctions, they may reply: "I was just following orders!" But these unwritten orders always seem to have originated no higher than the rank of staff sergeant or its organizational equivalent. Somehow, with the exception of My Lai, such orders do not originate at the commissioned officer level, and never at the field-grade officer level. There is always a break in the chain of command, usually quite low on the chain. The only exception is when a nation loses a war. The Nuremberg trials followed the orders all the way up. But these post-World War II trials were unique in the history of peacetime. Nixon lost the Watergate war. Yet in the midst of that war, he was in a safe position with respect to CREEP's flow of funds. Here, he knew what he was doing. He was out of the loop.
The Democrats had almost succeeded in scuttling him on the payola issue in the 1952 Presidential campaign, and only his deservedly famous "Checkers" speech saved him. Overnight, Checkers became the most famous dog in American political history, the dog that saved Nixon's career. Eisenhower had been prepared to drop Nixon from the ticket, but that speech went to the hearts of Republicans in the heartland. Nixon survived. Never again would he let himself be implicated in wrongdoing by this sign on his desk: "The bucks stop here." Yet in August, 1974, Nixon resigned. How did this happen?

THE SMOKING TAPES

Two events led to Nixon's removal: one public, one private. The first event was the televised admission by Alexander Butterfield, under questioning by a Republican Senate staff lawyer, that Nixon had bugged the White House. The Secret Service had tape recorded all of Nixon's conversations, beginning in early 1971. By this public admission, he became the most important of all the public players. Butterfield had been Deputy Assistant to the President. He had been recommended by Haldeman. He worked with the Secret Service on security matters. He had been in charge of secretly taping the Cabinet meetings. In late 1972, he had been appointed the head of the Federal Aviation Administration. He remained the head of the FAA after Nixon resigned. The recording system went on and off automatically throughout the Executive Office Building (1) whenever it detected a voice, if (2) the system previously detected Nixon's electronic locator, which the Secret Service made him wear. When he was in a room and someone started speaking, a tape recorder came on. Again, this is according to the official site. It is also what Butterfield told a conference in 2003. A transcript is posted on-line, and it is a fascinating document. http://shurl.org/predecessors This automated system was not the recording system used in the Cabinet Room. There, the system had to be activated manually.
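The two-condition gating described above -- sound activation, but only in rooms where the Secret Service locator had registered Nixon as present -- can be sketched as a toy state machine. This is merely an illustration of the logic as the account describes it, not documentation of the actual hardware; all names here are invented.

```python
class RoomRecorder:
    """Toy model of one wired room in the automated taping system:
    the tape rolls only while (1) the locator registers the President
    as present in the room and (2) a voice is being picked up."""

    def __init__(self):
        self.president_present = False
        self.recording = False

    def locator(self, present):
        # Secret Service locator signal for this room.
        self.president_present = present
        if not present:
            self.recording = False  # no locator signal, no taping

    def sound(self, voice_detected):
        # Voice activation is gated on the locator signal.
        self.recording = voice_detected and self.president_present


room = RoomRecorder()
room.sound(True)          # a voice alone does not start the tape
print(room.recording)     # False
room.locator(True)
room.sound(True)          # voice plus locator: the tape rolls
print(room.recording)     # True
```

The Cabinet Room, as noted above, was the exception: its recorder was switched manually rather than through this automatic gate.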
Butterfield had been in charge of the manual taping system until he went to the FAA. On July 13, 1973, he told Senate staff committee members about the tapes. He testified in public on July 16, 1973. He was of course asked about the tapes. He admitted everything. Chief of Staff Alexander Haig ordered the Secret Service to remove the system on July 18. Let me check my calendar: testimony on July 16; removal on July 18 . . . lightning-fast thinking by a retired 4-star general! Think about this chronology: The first bug was planted in the Democrats' office on May 28, 1972. The bungled break-in took place on June 17. On August 1, the "Washington Post" reported a $25,000 check, earmarked for the Nixon campaign, that had been deposited in the bank account of one of the burglars. On October 10, the "Post" reported that the FBI had determined that the break-in was part of a campaign of spying conducted by the President's re-election effort. http://shurl.org/waterchron72 The tape recording system was removed on July 18, 1973, at Haig's request, not Nixon's, according to the government's official site for the tapes. http://shurl.org/tapesystem Somehow, it had not occurred to Nixon that the tapes might be incriminating. "Let the good tapes roll!" Men later went to jail because of what was on those tapes. Some of them knew that the tape machine was running when they spoke the words that sent them to jail. Haldeman knew. Others may have known. Yet we are supposed to believe that they never told Nixon, "Turn off the tape recorder." I have my choice of conclusions: (1) Nixon and his assistants simply forgot about the recorders; (2) they thought that no one would gain access to the tapes before the statute of limitations ran out for them, and they cared nothing about future historians' assessments of their personal integrity; (3) Nixon did not have control over the recordings. Most commentators say #2 was the reason: Nixon's desire for accurate records for writing his memoirs.
It turns out that recorders had been installed by Eisenhower, Kennedy, and Johnson. (http://shurl.org/predecessors) We have learned that Roosevelt had a primitive recording system installed. Johnson had advised Nixon to start recording his conversations. He told him that he was using tapes to write his memoirs, which were published in 1971. Nixon at first resisted the suggestion, but in early 1971, he asked Butterfield to install the system. From the day he had the system installed, he lost control over his Presidency. He was leaving a record of everything he said. Butterfield and others have pointed out that Nixon was incapable of operating any mechanical device. This was why Butterfield had to turn on the recorder in the Cabinet room. This was also why Rose Mary Woods got blamed for the missing 18½ minutes. No one close to the President believed that Nixon could have erased it by himself. This means that Nixon from the beginning knew that he would have to have the tapes transcribed by a third party. Whatever was on them, a third party would know. Also, he would have to listen to a staggering number of tapes before getting any section transcribed. In less than three years, there were 3,700 hours of tapes. There would have been over three more years of taping on the day Haig removed the system. In his post-Presidency writing, how could he identify the tape of a specific meeting? By coordinating his appointments calendar with the dates on the tapes. If he could do this, so could the person in charge of the tapes, if he had access to the appointments calendar. The Secret Service controlled the tapes, which were stored in a room under the Oval Office. Nixon did not personally control the tapes. (http://shurl.org/predecessors) There was one simple way that he could get away with "I am not a crook": remove all the tape recorders and destroy all the tapes -- assuming there was only one copy. Haig finally pulled the plug. Too late.
At that point, destroying the tapes would have been obstruction of justice. On June 18, 1972, it would not have been. Someone was determined to keep those tapes rolling. Nixon did not remove the system; Haig did, on his own authority, the official version says. But, by then, it was legally too late to destroy the tapes.

INVESTIGATIVE REPORTING

Beginning no later than Nixon's resignation, a competent reporter would have followed more than the money. He would have pursued these questions: Who had something to gain from the tapes? What did he have to gain? Who had the power to leave the tapes running? How did he gain this power? To whom was he loyal? Why? What sanctions were over him? Why did Nixon's senior staff talk on tape? Why didn't they say: "The tapes go or I do"? What sanctions did they face for quitting? To whom were they loyal? The tapes provided enormous leverage against Nixon. The question is: For whom? And this: Starting when? After Butterfield's testimony, Nixon's opponents had far more leverage than before, but it was still insufficient leverage. They had to get access to all of the tapes, but the courts refused to grant this. Congress was not allowed to go on a fishing expedition. In effect, the prosecutors had to have a warrant issued by the court, meaning Judge Sirica. They had to be able to identify specific discussions related to suspected crimes, not discussions in general. Nixon soon invoked "executive privilege." The courts were unwilling to give carte blanche to the two Watergate committees to turn their staffs loose on those tapes -- not unless the Supreme Court authorized this. The Supreme Court did not do this until after the lower courts and Congress had access to the crucial segments of the tapes.

FOLLOW THE NUMBERS

We come now to the second event, which was a connected series of events: the heart of the Watergate investigation. This is not the heart of Watergate as such.
We still do not know for sure why the Plumbers installed bugs in the office of the Democratic National Committee. We do not know why they came back weeks later. But the most important thing we do not know is the name of the inside man at the White House.

There was an inside man. On him, the outcome of the investigation pivoted. Yet I know of only three people who have ever raised this issue in print. I am one of them: third in a row.

I first wrote about this in 1987. That was 14 years after the event, or, more accurately, a related series of events. A copy of my brief discussion is on-line. It is a section from the bibliography of my book, "Conspiracy: A Biblical View." (http://shurl.org/conspiracy)

I have never been contacted by any historian or any journalist regarding what you are about to read. I sent it to the professor whose journalism students did the famous investigation of Deep Throat a few years ago. They jointly concluded that he was Fred Fielding. The professor never replied.

Here is the story that Woodward and Bernstein somehow missed, though it was the central fact -- not Deep Throat's revelations -- in Nixon's defeat and their subsequent fame. Here is what I wrote in the 1996 revised edition of my 1987 book.

* * * * * * * * *

The Watergate investigation became a media extravaganza that seemed to elevate the reporter's calling to national status. Yet some of the details of the Watergate investigation raise questions that only hard-core conspiracy buffs ever ask.

For instance, we all know that Nixon was brought down because of the White House audiotapes. But he refused to give up these tapes in one fell swoop. In fact, not until 1996 were scholars given access to these tapes. Only under specific demands by government prosecutors did Nixon turn over limited sections of those tapes.

Gary Allen in 1976 summarized the findings of Susan Huck's February, 1975, article in "American Opinion," the publication of the John Birch Society.
Allen wrote in "The Kissinger File" (p. 179):

Consider the fantastic detail involved in the requests. On August 14th, [1973] for example, Judge Sirica demanded the "entire segment of tape on the reel identified as 'White House telephone start 5/25/72 (2:00 P.M.) (skipping 8 lines) 6/23/72 (2:50 P.M.) (832) complete.'" I don't know what all the identifying numbers mean -- but you have to agree that only somebody very familiar with the tapes would know. These boys knew precisely what to look for! Here is another sample request:

January 8, 1973 from 4:05 to 5:34 P.M. (E.O.B.)

a) at approximately 10 minutes and 15 seconds into the conversation, a segment lasting 6 minutes and 31 seconds;
b) at approximately 67 minutes into the conversation, a segment lasting 11 minutes;
c) at approximately 82 minutes and 15 seconds into the conversation, a segment lasting 5 minutes and 31 seconds.

Only Susan Huck asked the obvious question: How did the prosecutors know precisely when these incriminating discussions took place? There are only two possible answers: (1) someone with access to the tapes inside the White House was leaking the information; (2) there was a secret back-up set of the tapes in the hands of someone who was leaking the information.

Leaked information would have been illegal for prosecutors to use in court, yet this was how they brought Nixon down. To my knowledge, no reporter or professional historian has ever bothered to follow up on this remarkable oddity, or even mention it. Nobody ever asked: "What person was in charge of storing those tapes?" It took one of the least known and most diligent conspiracy historians (Ph.D. in geography) even to mention the problem.

Strange? Not at all. Normal, in fact. Such is the nature of history and the writing of history whenever the events in question point to the operation of powerful people whose private interests are advanced by what appear to be honorable public activities that cost a lot of money.
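The precision involved can be seen by converting the offsets in the January 8 request into clock times. This is purely illustrative arithmetic, using Python's standard datetime module; the start time and offsets come from the request quoted above.

```python
from datetime import datetime, timedelta

# Start of the January 8, 1973 E.O.B. conversation cited in the request (4:05 P.M.)
start = datetime(1973, 1, 8, 16, 5)

# (offset into the conversation, segment length), each as (minutes, seconds),
# taken from items a), b), and c) of the request
segments = [((10, 15), (6, 31)),
            ((67, 0), (11, 0)),
            ((82, 15), (5, 31))]

for (off_m, off_s), (len_m, len_s) in segments:
    seg_start = start + timedelta(minutes=off_m, seconds=off_s)
    seg_end = seg_start + timedelta(minutes=len_m, seconds=len_s)
    print(seg_start.strftime("%I:%M:%S %p"), "to", seg_end.strftime("%I:%M:%S %p"))
```

Asking for a segment that begins "approximately 82 minutes and 15 seconds into the conversation" presupposes that someone had already sat with the reel and a stopwatch.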
http://shurl.org/tapenumbers

* * * * * * * * *

This is the elephant in the West Wing. This is what no one discussed at the time, let alone now. In Part 2, on Friday, I will discuss this elephant in considerable detail.

-----------------

Inside Job: How Nixon Was Taken Down, Part 2
Gary North's REALITY CHECK
Issue 454, 5.7.10

IDENTIFYING THE MOLE

There was a mole in the White House. This is the central fact of the Watergate investigation. Without him, Nixon would not have been threatened with impeachment, let alone conviction. This is the issue that no one mentions and no one pursues. It is the elephant in the living room. It has been there for over 30 years. The media's response by now is universal: "What elephant? We don't see any elephant."

The Plumbers had broken into the Watergate complex to bug one office. They were called Plumbers because their original job was to plug leaks. Perhaps the greatest irony in American political history is this: the most damaging of all leakers in American history was inside the Nixon White House, and the most significant bugs in history were installed on Nixon's orders.

TIMING

Notice that Judge Sirica's request came in on August 14, less than a month after the recording system was shut down by Haig. Whoever was leaking the information identified the incriminating passages very fast. There were 3,700 hours of poorly recorded tapes. They were recorded at 15/16 inches per second: the lowest of low fidelity.

Think of the mole's task. Reviewing all of the tapes by himself from scratch, in time to tip off one of the two committees or Sirica, was impossible. He would have had to spend months listening to tapes unless he knew exactly where the passages were. If he did, then this was a long-term spying operation. It did not begin on July 16, 1973. If he had a photocopy of the President's appointments calendar, he could have narrowed down the meetings with key advisors.
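The "months of listening" claim is easy to check with back-of-the-envelope arithmetic. A minimal sketch of the workload, assuming an eight-hour listening day and a 22-day working month (both figures are my assumptions, not the newsletter's):

```python
# Rough estimate of how long a from-scratch, real-time review of the
# White House tapes would have taken one person.
TOTAL_TAPE_HOURS = 3700  # figure given in the text
HOURS_PER_DAY = 8        # assumed: a full working day of listening
DAYS_PER_MONTH = 22      # assumed: working days per month

days = TOTAL_TAPE_HOURS / HOURS_PER_DAY       # 462.5 working days
months = days / DAYS_PER_MONTH                # about 21 months
print(f"{days:.1f} working days, roughly {months:.0f} months")
```

Even with generous assumptions, a lone reviewer faced well over a year of continuous listening, which is the point of the argument above.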
This would have helped speed up the operation, but not enough to make possible the detailed identifications in less than a month. He would not have turned duplicates over to a Congressional staffer. What could the staffer have done with them, other than to parcel them out to low-level staffers for review? For them to have reviewed all of the tapes, it would have taken a team effort. This would have been risky: too many people in on the deal. Secrets are hard to keep personally, let alone in a group. Copies of the tapes were stolen goods and therefore inadmissible in a court, at least a court that was operating in full public view. The secret had to be maintained.

How did he do it? I see only four possibilities:

1. He had been monitoring the conversations and taking notes of what was being said, correlating this information with the tapes. He later reviewed his notes and retrieved the key tapes, identifying the key passages by using a stopwatch.

2. He had been making duplicate copies of all the tapes for months, and then delivered them all at once to someone who had access to a team of oath-bound intelligence community reviewers.

3. He made copies on a high-speed duplicator and delivered them to an outside team of oath-bound intelligence community reviewers.

4. He knew approximately when the incriminating discussions had taken place, and he went back to the specific tapes to time exactly how far into each tape each discussion began and ended. He then turned these numbers over to a Congressional staffer or other intermediary.

Option #1 makes this conclusion inescapable: the mole was a Secret Service agent whose full-time job was to record the tapes while listening to them. Options #1 and #2 assume the existence of a long-term strategy: use the tapes against Nixon when the opportunity arose. But what kind of opportunity? How could the mole have predicted Butterfield's testimony to the Senate committee?
Was there more to monitoring the tapes than a plan to cooperate with as-yet unassembled authorities? Options #2 and #3 assume the existence of outside teams of reviewers. As for option #4, who would have known which meetings had been crucial?

Butterfield left the White House for the Federal Aviation Administration (FAA) in late December, 1972. He had been in charge of taping Cabinet meetings, but Cabinet meetings were not where the Watergate cover-up was discussed. How could he have been the mole? Yet there is no doubt that he knew the mole. He may not have known that the mole had become a mole, but if he didn't suspect what was happening after Sirica's request of August 14, 1973, he was either remarkably unobservant or else completely out of the loop. No reporter today asks Butterfield about any of this. The elephant really is invisible to this generation of reporters.

For anyone to have made duplicate copies of all the tapes prior to Butterfield's testimony (option #2) would have been an immense undertaking for one man working part-time, i.e., not monitoring the discussions as they took place (option #1). It would have taken months. After Butterfield's testimony, it would have been impossible for a mole to do this by himself.

THE GATEKEEPER ISSUE

It is possible that a second set of tapes had been made from the very beginning, or at least after the break-in. Whoever had such a set of tapes would have had leverage over the President. But the Secret Service controlled the machines. How did anyone gain access to the tapes without Nixon's authorization? Why would Nixon have given it?

Without a team to review the tapes, how could one man have done this on short notice, i.e., after July 16? He would have had to be one of the Secret Service agents who sat in the room to monitor the tapes, assuming that someone did this, even though the tape machines came on automatically. He took notes.
But this would mean that he was self-consciously looking for ammunition to be used against Nixon. Why? The main problem with this theory is that other Secret Service agents did not know what was going on in the room where the machines were kept. If there was a full-time agent in that room all day long, there would have been suspicions.

In 2003, Butterfield attended a conference on the tapes. He described where the tape machines were located.

There was a little thing -- they blasted a hole in the brick wall down underneath the White House and put all this machinery inside a brick wall and then put a cabinet door over it. And I said to [Secret Service security agent Alfred] Wong, I -- this was in the locker room of the protective security [unclear -- microphone problem] Secret Service agents, so when they'd come to work, they had little lockers in there, and they'd change clothes and go home. They didn't stay long in this little room, and I said, "Aren't they going to think this great big panel -- what you call it used to be a brick wall, they're going to question that?" And he said, "No, they probably won't, and if they do, I'll just say, 'We've got something in there,' and they won't ask any questions." And that's true. The Secret Service wouldn't pry or probe at something like that. But there was a hell of a big door in there, and we -- [laughter] and it was a tiny little room anyway, pretty little.

http://shurl.org/predecessors

Here the equipment resided, and here boxes of tapes were stored. Nobody noticed. "Don't ask. Don't tell."

So, there were only a few Secret Service technicians who knew what was inside that little room. These men served as the gatekeepers. Anyone wanting access to the tapes had to get through at least one of these gatekeepers . . . unless one of them was the mole. If the tape operator was the mole, he could not have been in that room full-time without creating scuttlebutt and suspicion.
This does not rule out the possibility, but it does impose a special burden of proof on the person who chooses this option. This evidence would be difficult to obtain: pay receipts that say "for taping and personally monitoring Nixon's conversations." Alternatively, Butterfield could affirm in writing that one agent was always present in the taping room when Nixon was in the White House. That person was almost certainly the mole.

Is there any other possibility of a one-man operation? It is conceivable that someone very high in Nixon's inner circle had access to the tapes after Butterfield's testimony. Haig is one candidate. This is the man Gary Allen thought was the source of the leak. The hard evidence is not there, as far as I can see. But it is not beyond possibility.

A mole operating alone had to know approximately when the incriminating discussions had taken place. Only a highly placed person on Nixon's staff could have known this, presumably a participant. He somehow gained private access to these tapes, got out his stopwatch, and listened to each tape until he found various smoking guns. Then he told a Congressional staffer what sections to ask for.

If this was done by someone on Nixon's staff, it would have been someone who did not incriminate himself on a tape. Some of the highest-placed staff members went to jail or were exposed to the threat of jail. Haldeman knew, yet he kept talking. He was the one senior staff member identified by Butterfield as having known about the tapes. He went to jail. It is unlikely that he was the mole.

Let us assume for the sake of argument that no duplicate set of tapes existed prior to July 16. Someone who had detailed knowledge of the tapes was able to review the originals and then pass on this information within weeks. How did he get by the Secret Service? This is the central organizational issue of the Watergate story. I doubt that anyone will pursue this at this late date.
But it needs to be pursued if we are ever to get the story even remotely straight.

WHO GUARDS THE GUARDIANS?

This is one of the oldest questions in political history. I see no alternative to this conclusion: someone who had the cooperation of the Secret Service had access to the tapes. The tapes were stored in a secret room under the Oval Office. Here is Butterfield's account in 2003.

Carlin: Outside of you, who knew the system was being used?

Butterfield: Well, yeah, it was a deep dark secret, and I want to say no one knew, but the people who actually knew are the president, myself, Bob Haldeman and Larry Higby, Bob's staff assistant -- one of three staff assistants to Bob, Al Wong, who was the Technical Security Division Chief, Al Wong, W-O-N-G, and three technicians who, who put these tapes in: a fellow named Ray Zumwalt, Roy Schwalm, S-C-H-W-A-L-M, and Charles Bretts. They were the technicians, and one of those three changed the tapes when they had to be changed and that sort of thing.

He did not indicate that someone was in the taping room full-time. If someone was, and if he was there every day, then he becomes the most likely candidate for the title of lone mole. Otherwise, this had to be a team effort: the mole, plus a team of reviewers. The shorter the time period between Butterfield's testimony and Sirica's first request, the larger the team had to be, or else the more sophisticated the tape-reviewing technology had to be. The team had to find where the key discussions were on the tapes. There were a lot of discussions.

MOTIVE

Follow the money. Also, follow the oath. Look for a motive. If it's not money, sex, or power, then start looking for revenge. Had I been a reporter, after Nixon's resignation I would have gone looking for a motive -- a motive acknowledged as legitimate by one or more of the tapes' gatekeepers.
I would have gone looking for someone with (1) personal connections to the White House Secret Service unit that oversaw the taping and (2) a motive for revenge against Nixon and all the President's men. No reporter did this. Now it is up to historians, who tend to be even more risk-averse and peer-sensitive than reporters. Don't hold your breath.

Somebody got through the gates. He was working with a team. There was insufficient time for one man to review all of the tapes. The question has to be raised: Why would any of the technicians have cooperated with such a team? Why would he have handed over duplicate tapes, plus a photocopy of the President's schedule, to enable third parties to go snooping?

There had to be a jointly shared motive. The motive presumably had to do with the oath: loyalty. There was a higher shared loyalty involved, a loyalty to something above Nixon. This could have been the Constitution. In intelligence circles, I don't think this one is high on the list. Loyalties are more personal than Constitutional law. So are sanctions for violating the oath. The secret would remain a secret.

There is loyalty owed to oath-bound brothers. There is also loyalty to "cousins" operating under a different but similar oath-bound structure. There is loyalty of professionals against amateurs, of lifetime bureaucrats against temporary politicians. "Loyalty to" always implies "potential disloyalty to." Where there is loyalty, there is always the opportunity for disloyalty. This is why secrecy is so powerful: it offers an opportunity to destroy. Where there is oath-bound loyalty, the temptation for disloyalty increases, especially against those bound by a rival oath. There must be serious sanctions against betrayal. (You do not have to read the century-old works of Georg Simmel to understand these issues, but it helps.)

So, the question arises: What team supplied the reviewers? Answer: a group that perceived its corporate connection with the victims.
The victims are easy to identify: the Plumbers. Their connection is easy to identify: the CIA.

THE PLUMBERS

Who were they? G. Gordon Liddy (ex-FBI) ran the show. Then there were the five burglars: Bernard Barker, Virgilio Gonzalez, Eugenio Martinez, James McCord, and Frank Sturgis. The name of E. Howard Hunt (ex-CIA) appeared in address books carried by two Plumbers. By the time Butterfield testified, all seven were in jail. McCord had been a CIA agent until 1970. Hunt had been a CIA agent until 1970.

In March, 1973, McCord wrote to Judge Sirica from his jail cell to say that he and the others had been pressured to plead guilty. He singled out John Dean and the former Attorney General, John Mitchell. This set the framework: Nixon vs. the brothers. Nixon left them all to cool their heels in jail. Here is how Hunt described it in a 2004 interview in "Slate."

Slate: I still don't understand how you get involved in Watergate later. Through the CIA?

Hunt: I had been a consultant to the White House. I greatly respected Nixon. When Chuck Colson [special counsel to Nixon] asked me to work for the administration, I said yes. Colson phoned one day and said, "I have a job you might be interested in." This was before Colson got religion.

Slate: How long were you in prison for the Watergate break-in?

Hunt: All told, 33 months.

Slate: That's a lot of time.

Hunt: It's a lot of time. And I've often said, what did I do?

Slate: Did you get a pardon?

Hunt: No. Never did. I'd applied for one, and there was no action taken, and I thought I'd just humiliate myself if I asked for a pardon.

Laura Hunt: He was sort of numb because all of this happened to his wife and his family; his children went into drugs while he was still in prison.

Slate: Wasn't your first wife killed in a plane crash?

Laura Hunt: She was killed when her plane crash-landed at Chicago's Midway Airport.
And there was all this speculation from conspiracy buffs that the FBI blew the plane up or something -- so that she would never talk, all this ridiculous stuff.

http://slate.msn.com/id/2107718

Ridiculous stuff? Strange stuff, yes, but in no way was it ridiculous. Dorothy Hunt had been an ex-CIA operative. She had met her husband in the CIA. Her plane went down on December 8, 1972: a United Airlines flight from Washington to Chicago. It crashed at Chicago's Midway Airport. Most of the passengers and all of the crew members died.

Within a few hours, a team of 50 FBI agents was at the scene, investigating everything. This is no rumor. It was confirmed in a June 11, 1973 letter from acting FBI Director William Ruckelshaus to the Director of the National Transportation Safety Board, who had sent a letter of complaint (six months after the event) to Ruckelshaus regarding the interference of the FBI. Getting a team of 50 FBI agents to a supposed crime scene within hours is so unheard of as to mark any such event as historically unique. Legendary. This was not done by the book.

Mrs. Hunt had been carrying a little over $10,000 in cash -- the equivalent of $50,000 today. In his book, "A Piece of Tape," McCord writes that he heard Dorothy Hunt say that her husband had information that would impeach the President.

http://shurl.org/crash

(Note: I refer to this Web page to provide transcripts of the letters sent by the two Directors, plus the basic chronology of the crash. I do not trust several of the sources cited.)

The Hunts had been "present at the creation," when the CIA was known as the OSS. Hunt's condition would not have escaped the notice of former colleagues. Nixon had let a team of former national security operatives go to jail for a burglary related to his re-election. It was clear by late 1972 that they were not going to be pardoned. The crash was followed by these peculiar events, which were long forgotten until the Web revived them.

1. The day after the crash, Nixon nominated Egil Krogh, the head of the Plumbers unit, as Undersecretary of Transportation. The Department of Transportation is the agency that supervises the National Transportation Safety Board. (He was confirmed in February, resigned in May, and pleaded guilty to supervising Hunt and Liddy in the break-in of Daniel Ellsberg's psychiatrist's office. He went to jail.)

2. Two weeks after Krogh's nomination, Nixon nominated Butterfield as head of the FAA.

3. In January, 1973, Dwight Chapin, Nixon's appointments secretary, resigned. He immediately took a senior-level position with United Airlines in Chicago. (Chapin was convicted in 1974 for lying to the grand jury in 1973 and for offering me a job at the White House in 1971 -- no, scratch that: he only said he MIGHT offer me a job. He never did. He did hire Donald Segretti, who later ran the dirty tricks operation. As a result, they both went to jail.)

To imagine that the intelligence community was unaware of the rapid sequence of these aeronautics-related events is to have a vivid imagination. An oath-bound fraternity whose members believe that several of their own have been taken down by outsiders is a force to be reckoned with. There is loyalty at stake. There is also the matter of self-preservation. There is a well-known strategy for dealing with such threats: tit for tat. Within the intelligence community, there is a degree of cooperation among professionals: those inside vs. those outsiders known as politicians.

LONE MOLE OR TEAM EFFORT?

The tapes were the Achilles' heel of Nixon's attempt to avoid public exposure. John Dean could talk, others could talk, but it was their word against Nixon's . . . unless the prosecutors could use Nixon's words against Nixon. The prosecutors received information regarding the precise location of these words.
They received this information because someone inside the White House leaked to investigators working with or for Judge Sirica the IDs of tapes that would condemn Nixon. But the mole could not have obtained this information by himself, unless he had been working on this project almost from the beginning: taking notes and identifying tapes.

This raises a key question. If the project began before July 16, how would he have known that Butterfield would tell the Committee about the tapes? Monitoring the information on the tapes made strategic sense only if the courts or the committees knew about the existence of the tapes. Otherwise, there was nothing to subpoena. If he did assemble this information over many months while sitting in the taping room, pen in hand, taking notes, then he had another agenda. He was monitoring what was being said for purposes other than cooperating with Sirica, who was not yet in the picture.

This raises two questions: (1) Who guards the guardians? (2) To whom do they report?

Here is what we know for certain: the information was made available to Sirica within weeks of Butterfield's testimony. To get access to the tapes, someone had to get by the Secret Service and into that room beneath the Oval Office. Someone did.

The Secret Service is pledged to save the President's life. It is not pledged to save his career. Its agents live in every President's household until he dies, and then they remain with his widow until she dies. We do not call this arrangement what it obviously is: a lifetime monitoring operation.

Had I been a Washington reporter in 1974, and had I known of Sirica's specific tape requests, which were a matter of public record, I would have gone looking for a connection between one of the Plumbers and one of the tapes' gatekeepers. One analyst did: Mae Brussell. She was a legendary left-wing conspiracy theorist who saw mysterious connections everywhere. If ever there was a believer in a vast right-wing conspiracy, it was Mae Brussell.
She immediately spotted a connection. She wrote in "The Realist" (July, 1973) that the Ervin committee had called the wrong witnesses. Her first example was Al Wong.

1. Wrong witnesses called. Last July, 1972, it was obvious that Al Wong, the Secret Service man who hired James McCord, should be a major witness. When it was disclosed by Alexander Butterfield that the White House was bugged, Al Wong appeared to be holding the tapes. Wong and McCord were close associates.

http://shurl.org/connection

What was she referring to? What had Wong hired McCord to do? The previous August, also in "The Realist," she had reported on the assignment.

James McCord, Jr. held two important jobs at the time of his arrest. He was Chief of Security for the Committee to Re-elect Richard Nixon. With that appointment, McCord was issued his own radio frequency. And that employment was the smaller assignment of the two. The biggest contract a security agent could receive went to McCord Associates, selected by Secret Service agent Al Wong, to provide all security for the republican Convention in Miami.

http://shurl.org/brussell

She offered no footnote to support this claim, but she surely was on top of this issue from the beginning. Indeed, she was the first journalist to suspect this connection: Wong, McCord, and the tapes. If I were going to write a book on Watergate, I would begin by looking for evidence to support her second paragraph. Given McCord's CIA background and his CREEP position, the connection sounds plausible.

This does not mean that CIA agents necessarily constituted the reviewing team. It could have been a select group of Secret Service agents, acting on behalf of similarly oath-bound "cousins" or some other group. There is a unique piece of information, reported by Gary Allen in his book on Kissinger and also in his book on Nelson Rockefeller. He quotes from "Newsweek" (September 23, 1974):

While former White House chief of staff H.R. Haldeman awaits trial for his part in Watergate, the Secret Service chief he ousted from the White House last year has landed a plum job. Robert H. Taylor, 49, who tangled with Haldeman over Nixon security procedures, is now head of the private security forces for all the far-flung Rockefeller family enterprises. (http://shurl.org/taylor)

If the mole acted alone in note taking, then he began early. He was alone in that room, but he was not alone with respect to a hierarchy. Government bureaucrats on salaries do not do "extra credit." They get paid to follow orders. Who gave the order? When? Why? These are the kinds of questions that the mainstream media steadfastly refuse to ask. They find it easier to believe in the tape fairy.

INADMISSIBLE EVIDENCE

Deep Throat confirmed to Woodward that CREEP was where the money flowed into and out of. This was a smoking gun, if modern gunpowder smoked. It was a .22 pistol: CREEP. The story would have come out anyway because of the $25,000 check. Following that money was easy. It was not worth a Pulitzer Prize and a movie.

To take Nixon down, there had to be evidence that would stand up in court. This evidence had to have the appearance of being admissible, i.e., not illegally obtained. Yet it was unquestionably illegally obtained. The specificity of the location of the smoking guns on the tapes should have made it clear that the evidence was inadmissible. Yet Judge Sirica -- "Maximum John" -- pretended that it was admissible. He pretended that the tape fairy had delivered the IDs. Every reporter, then and now, has gone along with him. Once the statute of limitations ran out (1980), nobody could prosecute the accused, had there been an accused. There never was. I do not think there ever will be.

It was not until July 24, 1974 that the Supreme Court ordered Nixon to turn over all of the tapes. He refused and resigned on August 9.
Until the Supreme Court ordered him to deliver all of the tapes, he may have thought he could successfully stonewall his prosecutors. He was wrong. From the day he refused to hand over the specific tapes demanded by Sirica, he was on the defensive. From the day that Sirica started using stolen evidence to hound Nixon, it was only a matter of time. Maximum John was willing to break the law to get him. Congress was willing to break the law to get him. Nixon was doomed. The federal system's checks and balances by 1973 were managed by tax-funded "crooks": law-breakers all.

Nixon's resignation under fire created an immediate problem for Republican Congressman John Hammerschmidt of Arkansas, who had recommended that Nixon not resign and had said that Nixon's offense might not be impeachable. He had stood almost alone. His opponent had gone for the jugular:

[There is] no question that an admission of making false statements to government officials and interfering with the FBI and the CIA is an impeachable offense.

His opponent was Bill Clinton. (http://shurl.org/neverlie) Clinton lost in November. Not many Democrats did, however.

Nixon never did turn over all of the tapes. He died in 1994. Only then did the government get all the tapes. The government is still waiting to release to the public the final batch: November, 1972 through July, 1973.

http://shurl.org/someday

But don't worry. They are on their way. They are being administered by the same experts who took over the administration of the Ark of the Covenant in the final scene of "Raiders of the Lost Ark."

UNANSWERED QUESTIONS

It is now over three decades after these events. We still do not know why the burglars broke in a second time in June, 1972. We do not know how or why 50 FBI agents showed up at the plane crash site where the ex-CIA wife of the ex-CIA suspected Plumber died in December. We do not know why Nixon left the tape recorders running.
We do not know for sure what Nixon did, or was planning to do, to persuade the mole or his oath-bound associates to supply the prosecutors with proof of the smoking guns. We do not know the transmission belt by which the prosecutors were able to identify the precise points on the tapes that sent Nixon's senior staffers to prison and were about to get him impeached.

Richard Nixon, in his complete self-confidence, had ordered the tape recorders installed. To use Haldeman's phrase, he not only repeatedly let the toothpaste out of the tube, he left a record of every squeeze. Why? He installed the tape recorders when he did not need them. He, like his presidential predecessors, believed he was going to retain the upper hand, the final say, when it came time for him to write his memoirs. He let famous men speak in his presence, unaware of the tapes. They would speak their minds, he thought, but he would remain clever. As the English say, he was too clever by half.

The following exchange took place in 2003:

Carlin: Mr. Butterfield, why do you think President Nixon sort of let the machine run? I mean, do you think he sometimes even forgot about the fact that he was taping?

Butterfield: Absolutely. Yes. Yeah. We, we marveled at his ability to, uh, seemingly be oblivious to the tapes. I mean, even I was sitting there uncomfortably sometimes saying, "He's not really going say this, is he?" [laughter] But . . . but he did. . . .

http://shurl.org/predecessors

Nixon was paranoid as no President ever was, either before or after. He was convinced that "they" were out to get him: the liberals, the Jews, the media, the Eastern elite. And he was right. There was a widespread visceral hatred of Nixon that has been matched only by hatred for Hillary Clinton. But in circling the wagons against enemies outside, he forgot Jesus' warning: And a man's foes shall be they of his own household (Matthew 10:36).

Nixon received a pardon for crimes never presented in a court of law.
He received it from the only President ever chosen by his predecessor rather than by a vote, who in turn appointed as his Vice President a man who had publicly insisted, "I never wanted to be vice president of anything" -- Nelson Rockefeller. This was the man who had hired Nixon after his California gubernatorial defeat in 1962, bringing him to New York City to live for free in a condominium owned by Rockefeller, and putting him under the authority of bond lawyer John Mitchell, who later went to jail over Watergate. This was the man who had become Henry Kissinger's patron, who in turn hired Col. Al Haig in 1969, who was a 4-star general just four years later -- skipping the third star altogether. He became Supreme Allied Commander of NATO in 1974, soon after Nixon quit. There were winners and losers in Rockefeller's orbit. The pardon ended the legal issue for Nixon. But, until the day he died, he refused to turn over all of the tapes. His estate finally surrendered the last 201 hours' worth of tapes in November, 1996. http://shurl.org/holdout CONCLUSION From the day that the first highly detailed request came from Sirica, Nixon must have known the truth: he was going to become the victim of the biggest leak in American history. From that point on, Nixon knew he had been betrayed from the inside. He knew he was trapped. He was a lawyer. He had made his political career in 1948 based on rolls of film that had been buried in a hollowed-out pumpkin: evidence that Whittaker Chambers had actually forgotten he had regarding Alger Hiss's spying. http://shurl.org/pumpkin What Nixon must have known, no salaried reporter has figured out. Susan Huck did. Gary Allen did. I did. But none of us was ever a full-time reporter. The man who supplied the prosecutors with the technically inadmissible evidence of Nixon's smoking guns may still be alive. He has not broken silence. He has maintained his loyalty. He has kept the oath.
If he was watching the evening news in the first week of June, 2005, he must have had a good chuckle. For over three decades, the press played hide-and-seek with the shadow known as Deep Throat. Reporters and authors expended time, energy, and money on tracking him down. At long last, they have found him, senile and unable to tell his story. "Mystery solved! Case closed!" Meanwhile, I hear unerasable voices in my mind. "Good night, David." "Good night, Chet." "And that's the way it was." From checker at panix.com Fri Jul 15 19:35:22 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:35:22 -0400 (EDT) Subject: [Paleopsych] Paul R. Ehrlich, Simon A. Levin: The Evolution of Norms Message-ID: Paul R. Ehrlich, Simon A. Levin: The Evolution of Norms http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0030194 PLoS BIOLOGY: a peer-reviewed, open-access journal from the PUBLIC LIBRARY of SCIENCE Volume 3 | Issue 6 | JUNE 2005 [Here it is again, with all the footnotes. Sorry I wasn't paying attention when I sent it two days ago.] Essay Essays articulate a specific perspective on a topic of broad interest to scientists. Paul R. Ehrlich is with the Department of Biological Sciences, Stanford University (Stanford, California, United States of America). Simon A. Levin is with the Department of Ecology and Evolutionary Biology, Princeton University (Princeton, New Jersey, United States of America). *To whom correspondence should be addressed. E-mail: slevin at princeton.edu Published: June 14, 2005 DOI: 10.1371/journal.pbio.0030194 Citation: Ehrlich PR, Levin SA (2005) The Evolution of Norms. 
PLoS Biol 3(6): e194 ______________________________________________________________________ Over the past century and a half, we have made enormous progress in assembling a coherent picture of genetic evolution--that is, changes in the pools of genetic information possessed by populations, the genetic differentiation of populations (speciation) (see summaries in [1,2]), and the application of that understanding to the physical evolution of Homo sapiens and its forebears ([3]; e.g., [4,5]). But human beings, in addition to being products of biological evolution, are--vastly more than any other organisms--also products of a process of "cultural evolution." Cultural evolution consists of changes in the nongenetic information stored in brains, stories, songs, books, computer disks, and the like. Despite some important first steps, no integrated picture of the process of cultural evolution that has the explanatory power of the theory of genetic evolution has yet emerged. Much of the effort to examine cultural evolution has focused on interactions of the genetic and cultural processes (e.g., [6], see also references in [7]). This focus, however, provides a sometimes misleading perspective, since most of the behavior of our species that is of interest to policy makers is a product of the portion of cultural evolution [8] that occurs so rapidly that genetic change is irrelevant. There is a long-recognized need both to understand the process of human cultural evolution per se and to find ways of altering its course (an operation in which institutions as diverse as schools, prisons, and governments have long been engaged). In a world threatened by weapons of mass destruction and escalating environmental deterioration, the need to change our behavior to avoid a global collapse [9] has become urgent. A clear understanding of how cultural changes interact with individual actions is central to informing democratically and humanely guided efforts to influence cultural evolution. 
While most of the effort to understand that evolution has come from the social sciences, biologists have also struggled with the issue (e.g., p. 285 of [10], [11-16], and p. 62 of [17]). We argue that biologists and social scientists need one another and must collectively direct more of their attention to understanding how social norms develop and change. Therefore, we offer this review of the challenge in order to emphasize its multidisciplinary dimensions and thereby to recruit a broader mixture of scientists into a more integrated effort to develop a theory of change in social norms--and, eventually, cultural evolution as a whole. What Are the Relevant Units of Culture? Norms (within this paper understood to include conventions or customs) are representative or typical patterns and rules of behavior in a human group [18], often supported by legal or other sanctions. Those sanctions, norms in themselves, have been called "metanorms" when failure to enforce them is punished [17,19,20]. In our (liberal) usage, norms are standard or ideal behaviors "typical" of groups. Whether these indeed represent the average behaviors of individuals in the groups is an open question, and depends on levels of conformity. Conformity and nonconformity with these norms are attributes of individuals, and, of course, heterogeneity in those attributes is important to how norms evolve. Norms and metanorms provide a cultural "stickiness" (p. 10 of [21]) or viscosity that can help sustain adaptive behavior and retard detrimental changes, but that equally can inhibit the introduction and spread of beneficial ones. It is in altering normative attitudes that changes can be implemented.
Here, we review the daunting problem of understanding how norms change, discuss some basic issues, argue that progress will depend on the development of a comprehensive quantitative theory of the initiation and spread of norms (and ultimately all elements of culture), and introduce some preliminary models that examine the spread of norms in space or on social networks. Most models of complex systems are meant to extract signal from noise, suppressing extraneous detail and thereby allowing an examination of the influence of the dominant forces that drive the dynamics of pattern and process. To this end, models necessarily introduce some extreme simplifying assumptions. Early attempts to model cultural evolution have searched for parallels of the population genetic models used to analyze genetic evolution. A popular analogy, both tempting and facile, has been that there are cultural analogues of genes, termed "memes" [22,23], which function as replicable cultural units. Memes can be ideas, behaviors, patterns, units of information, and so on. But the differences between genes and memes make the analogy inappropriate, and "memetics" has not led to real understanding of cultural evolution. Genes are relatively stable, mutating rarely, and those changes that do occur usually result in nonfunctional products. In contrast, memes are extremely mutable, often transforming considerably with each transmission. Among humans, genes can only pass unidirectionally from one generation to the next (vertically), normally through intimate contact. But ideas (or "memes") now regularly pass between individuals distant from each other in space and time, within generations, and even backwards through generations. Through mass media or the Internet, a single individual can influence millions of others within a very short period of time. Individuals have no choice in what genes they incorporate into their store of genetic information, and the storage is permanent.
But we are constantly filtering what will be added to our stored cultural information, and our filters even differentiate according to the way the same idea is presented [24,25]. People often deliberately reduce the store of data (for example, when computer disks are erased, old books and reprints discarded, etc.), or do so involuntarily, as when unreinforced names or telephone numbers are dropped from memory. Such qualitative differences, among others, ensure that simple models of cultural evolution based on the analogy to genetic evolution will fail to capture a great deal of the relevant dynamics. A model framework addressed to the specific challenges of cultural evolution is needed. In the models discussed below, the most basic assumption is that the spread (or not) of norms shares important characteristics with epidemic diseases. In particular, as with diseases, norms spread horizontally and obliquely [14], as well as vertically, through infectious transfer mediated by webs of contact and influence. As with infectious diseases, norms may wax and wane, just as the popularity of norms is subject to sudden transitions [3]. On the other hand, there are unique features of cultural transmission not adequately captured by disease models, in particular the issue of "self-constructed" knowledge, which has long been a source of interest, and the development of problem-solving models in psychology ([26,27]; D. Prentice, personal communication). New syntheses are clearly required. Microscopic Dynamics Substantial progress has been made toward the development of a mathematical theory of cultural transmission, most notably by Cavalli-Sforza and Feldman [14], and Boyd and Richerson [11]. Cavalli-Sforza and Feldman consider the interplay between heritable genetic change and cultural change.
This is an important question, addressed to the longer time scale, with a view to understanding the genetic evolution of characteristics that predispose individuals to act in certain ways in specified situations. For many of the phenomena of interest, however, individual behaviors have not evolved specifically within the limited context of a single kind of challenge, but in response to a much more general class of problems. Efforts to provide genetic evolutionary explanations for human decisions today within the narrow contexts in which they occur may be frustrated because generalized responses to evolutionary forces in the distant past have lost optimality, or even adaptive value. Extant human behaviors, for example, may be the relics of adaptations to conditions in the distant past, when populations were smaller and technology less advanced. Attempts to understand them as adaptive in current contexts may therefore be futile. Thus, we prefer to take the genetic determinants of human behavior (that, for example, we react strongly to visual stimuli) as givens, and to ask rather how those initial conditions shape individual and social learning [3]. Similar efforts have been undertaken by others, such as Henrich and Boyd [28] and Kendal et al. [20]. The sorts of models put forth by Cavalli-Sforza and Feldman, Boyd and Richerson, and others are a beginning towards the examination of a colossal problem. To such approaches, we must add efforts to understand ideation (how an idea for a behavior that becomes a norm gets invented in the first place), and filtering (which ideas are accepted and which are rejected). How many ideas just pop up in someone's brain like a mutation? How many are slowly assembled from diverse data in a single mind? How many are the result of group "brainstorming"? How, for example, did an idea like the existence of an afterlife first get generated? Why do ideas spread, and what facilitates or limits that spread?
What determines which ideas make it through transmission filters? Why are broadly held norms, like religious observance, most often not universal (why, for instance, has atheism always existed [29,30])? Ideas may be simply stated, or argued for, but transmission does not necessarily entail the reception or adoption of behaviors based on the idea, e.g., [31]. What we accept, and what gets stored in long-term memory, is but a tiny sample of a bombardment of candidate ideas, and understanding the nature and origin of filters is obviously one key to understanding the life spans of ideas and associated behaviors once generated. The Emergence of Higher-Level Structure: Some Simple Models Our filters usually are themselves products of cultural evolution, just as degrees of resistance of organisms to epidemics are products of genetic evolution. Filters include the perceived opinions of others, especially those viewed as members of the same self-defined social group, which collectively attempt to limit deviance [32-34]. "Conformist transmission," defined as the tendency to imitate the most frequent behavior in the population, can help stabilize norms [28] and indeed can be the principal mechanism underlying the endogenous emergence of norms. The robustness of norms can arise either from the slow time scales on which group norms shift, or from the inherent resistance of individuals to changing their opinions. In the simplest exploration of this, Durrett and Levin (unpublished data) have examined the dynamics of the "threshold" voter model, in which individuals change their views if the proportion of neighbors with a different opinion exceeds a specified threshold. Where the threshold is low, individuals are continually changing their opinions, and groups cannot form (Figure 1A). In contrast, at high thresholds, stickiness is high--opinions rarely change--and the system quickly becomes frozen (Figure 1B). Again, groups cannot form. 
In between, however, at intermediate thresholds (pure conformist transmission), groups form and persist (Figure 1C). In the simplest such models in two dimensions, unanimity of opinions will eventually occur, but only over much longer time periods than those of group formation (see also [20]). When the possibility of innovation (mutation) is introduced in a model that considers linkages among traits and group labels, and where individuals can shift groups when their views deviate from group norms sufficiently, multiple opinions and multiple groups can persist, essentially, indefinitely (Figure 1D). Figure 1. (A) Long-term patterning in the dynamics of two opinions for the threshold voter model with a low threshold. (B) Long-term patterning in the dynamics of two opinions for the threshold voter model with a high threshold. Note the existence of small, frozen clusters. (C) Long-term patterning in the dynamics of two opinions for the threshold voter model with an intermediate threshold. Note the clear emergence of group structure. (D) Long-term patterning in a model of social group formation, in which individuals imitate the opinions of others in their (two) groups, and others of similar opinions, and may switch groups when their views deviate from group norms. The formation of groups is the first step in the emergence of normative behavior; the work of Durrett and Levin shows that this can occur endogenously, caused by no more than a combination of ideation and imitation. The existence of a threshold helps to stabilize these groups, and to increase stickiness; furthermore, if threshold variation is permitted within populations, these thresholds can coevolve with group dynamics. What will the consequences be for the size distribution of groups, and for their persistence? Will group stability increase, while average size shrinks?
What will be the consequences of allowing different individuals to have different thresholds, or of allowing everyone's thresholds to change with the size of the group? When payoffs reward individuals who adhere to group norms, and when individuals have different thresholds, will those thresholds evolve? The answers to such questions could provide deep insights into the mechanisms underlying the robustness of norms, and are ripe for investigation through such simple and transparent mathematical models. Modeling may also shed light on why some norms (like fashions) change so easily, while others (like foot binding in imperial China) persist over centuries, and more generally on how tastes and practices evolve in societies. Norms in art and music change rapidly and with little apparent effort at persuasion or coercion. But three-quarters of a century of communism barely dented the religious beliefs of many Russians, despite draconian attempts to suppress them [35], and several centuries of science have apparently not affected the belief of a large number of Americans in angels and creationism (e.g., [36,37]). Then there are the near-universal norms, such as the rules against most types of physical assault or theft within groups that, although they vary in their specifics, are interpreted as necessary to preserve functional societies. Group-selection explanations for such phenomena (e.g., [12]) are, we argue, neither justified nor necessary (see also pp. 221-225 of [38], [39]). Such behaviors can emerge from individual-based models, simply involving rewards to individuals who belong to groups. There are degrees: the evolution of cooperation is facilitated by tight interactions, for example when individuals interact primarily with their nearest neighbors [40,41], and the payoffs that come to individuals from such cooperation can enhance the tightness of interactions and the formation of groups. 
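The threshold voter dynamic described above is simple enough to simulate in a few lines. The following is a minimal illustrative sketch, not the Durrett-Levin implementation: the 20-by-20 lattice size, the synchronous update sweep, the four-neighbor (von Neumann) neighborhood, and the periodic boundaries are all assumptions made for the example. A site flips its opinion only when the fraction of disagreeing neighbors exceeds the threshold, so a high threshold freezes the initial pattern while a low threshold makes opinions churn.

```python
import random

def step(grid, theta):
    """One synchronous sweep of a threshold voter model on a periodic
    2-D lattice: a site flips its opinion (0 or 1) when the fraction of
    its four nearest neighbours holding the other opinion exceeds theta."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            nbrs = (grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                    grid[i][(j - 1) % n], grid[i][(j + 1) % n])
            if sum(v != grid[i][j] for v in nbrs) / 4.0 > theta:
                new[i][j] = 1 - grid[i][j]
    return new

random.seed(0)
grid = [[random.randint(0, 1) for _ in range(20)] for _ in range(20)]

# High threshold: the flip condition is never met, so the pattern freezes.
frozen = step(grid, theta=1.0)

# Low threshold: a single dissenting neighbour forces a flip, so
# opinions churn and stable groups cannot form.
churned = step(grid, theta=0.0)
```

With an intermediate threshold such as theta=0.5 (a strict majority of neighbors), repeated application of step lets contiguous blocks of like opinion form and persist, which is the regime in which the essay locates endogenous group formation.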
This easily explains why mutually destructive behaviors, like murder, are almost universally proscribed. Group benefits can emerge, and can enhance these effects, but it is neither necessary nor likely that group selection among groups for these behaviors overrides individual selection within groups when these groups are not composed of closely related individuals [42]. Simple models could address such things as the role of contagion in cultural evolution, recognized in one of the first works on psychology [43] in the context of religious revivals and belief, as what has been described as "pious contagion" (p. 10 of [30]). But models must also address issues such as the roles of authority or moral entrepreneurs (individuals engaged in changing a norm) [32], to say nothing of the impacts of advertising and the norm-changing efforts of the entertainment and other industries. In reality, we are intentional agents who act with purpose. In maturing, we master the norms that have evolved over a long period, to which we may adapt in different ways and which we may even (in the case of moral entrepreneurs) strive to change. For a moral entrepreneur, a group that is too small may have little influence and not be worth joining. But large groups may be too difficult to influence, and so may also not be worth joining. For such individuals, there is likely an optimal group size, depending on the change the individual wants to effect. Groups also introduce ancillary benefits of membership that change the equation. Such considerations influence decisions such as whether to join a third-party effort in a political campaign; understanding the interplay between individual decisions and the dynamics of party sizes is a deeply important and fascinating question, with strong ecological analogies.
Groups, collectively, must also wrestle with the costs and benefits of increasing membership, thereby enhancing influence while potentially diminishing consensus and hence the perceived benefits to members. Innovation and Conservatism Cultural evolution, like biological evolution, contains what we like to call the "paradox of viscosity." Evolving organisms must balance the need to change at an appropriate rate in response to varying environmental conditions against the need to maintain a functioning phenome. This trade-off between conservatism and adaptability, between stability and exploration, is one of the central problems in evolutionary theory. For example, how much change can there be in the genes required to maintain adaptation in a caterpillar without lethally affecting the structure and functioning of the butterfly (p. 303 of [44])? Conservatism in religion might be explained by the lack of empirical tests of religious ideas. But even in military technology and tactics, where empirical tests are superabundant, changes are slower than might be expected. For example, the British high command in World War I did not react rapidly to the realities of barbed wire, massed artillery, and machine guns [45]. Even so, the conservatism of the generals may be overrated [46]. Macroscopic Dynamics We have thus far examined the evolution of norms in isolation--that is, how the views of individuals (and thus the constituents of a pool of nongenetic information) change through time. But everywhere in common discourse and technical literature, it is assumed that norms are bundled into more or less discrete packages we call cultures, and that those packages themselves evolve.
Recall everyday notions such as that American culture of the 1990s was very different from that of the 1960s, that Islamic culture did not undergo the sort of reformation that convulsed Christian culture (for example, [47]), and that Alexander the Great carried Greek culture throughout the Mediterranean and as far east as Persia. The problem of defining "cultures" in cultural evolution seems analogous to that of defining "species" (or other categories) in genetic evolution. There has been a long and largely fruitless argument among taxonomists over the latter [48], and an equally fruitless debate in anthropology (and biology) on the definition of culture [39,49-57]. Again, we suggest that the parsing of the various influences that create and sustain norms and cultures is ripe for theoretical modeling, but it must begin to incorporate the full richness on multiple scales of space, time, and complexity. Durrett and Levin [3] develop a model integrating the dynamics of clusters of linked opinions and group membership; appropriate extensions would allow group characteristics to evolve as well, but on slower time scales. The oversimplicity of models of symmetric imitation on regular grids, as represented in our simple models, must give way to those that incorporate fitnesses and feedbacks, as well as asymmetries and power brokers, on more complex networks of interaction [58]. Challenges and Hypotheses One of the major challenges for those interested in the evolution of norms is, at the most elementary level, defining a norm. This is related to another general problem of defining exactly what is changing in cultural evolution--which we might call the "meme dilemma" in honor of Dawkins' regrettably infertile notion. A second major challenge is discovering the mechanism(s) by which truly novel ideas and behaviors are generated and spread. A third is discovering the most effective ways of changing norms.
We've got a long way to go before being able to meet those challenges. One place to start is to begin formulating hypotheses about the evolution of norms that can be tested with historical data, modeling, or even (in some cases) experiments. Some hypotheses we believe worth testing (and some of which may well be rejected) are given in Box 1. Box 1. Sample Hypotheses about the Evolution of Norms Hypothesis 1. Evolution of technological norms will generally be more rapid than that of ethical norms. Technological changes are generally tested promptly against environmental conditions--a round wheel wins against a hexagonal one every time, and the advantages of adopting it are clear to all. Ethical systems, on the other hand, cannot often be tested against one another, and the standards of success are not only generally undetermined; they often vary from observer to observer and are the subject of ongoing controversy among philosophers. Hypothesis 2. In societies with nonreligious art, the evolution of norms in art will be more rapid than that of norms in religion. We hypothesize that art is less important to the average individual than his or her basic system of relating to the world, and conservatism in the latter would be culturally adaptive (leading to success within a culture). Hypothesis 3. Military norms will change more in defeated nations than victorious ones. Were the Maginot Line and the generally disastrous performance of the French army in 1940 examples of a more general rule? Does success generally breed conservatism? Hypothesis 4. The spread of a norm is not independent but depends on the spread of other norms (norm clusters). Does, for example, empathy decrease with social stratification? Hypothesis 5. Susceptibility to the spread of norms is negatively correlated with level of education. Are the less educated generally more conformist, or does the spread of norms depend almost entirely on the character of the norm? Hypothesis 6.
Horizontal transmission will show less stickiness than vertical transmission. This conjecture is based on anecdotal observations that norms like using hula hoops come and go and are primarily horizontally transmitted, and religious values and other high-viscosity points of view are mostly vertically transmitted (p. 129 of [14], [59]). In this essay we have tried to be provocative rather than exhaustive. There is a welter of issues we have not even attempted to address, including: (1) asymmetries of power in the spread of norms, (2) the role of networks, (3) the efficacy of persuasion as opposed to imitation, (4) the cause of thresholds in the change of norms, (5) the genesis of norms during child development, (6) the connection between attitudes and actions, (7) competition among norms from different cultures, and (8) the question of whether norms can exist "free of people" in institutions. Institutions certainly may emerge as independent structures, stabilized by laws and customs that are enforced to varying degrees through formal punishment or social pressure. Can such norms persist long even when adherence to them is disappearing? The interplay between the dynamics of individual behaviors and normative rules, operating on different time (and other) scales, may be the key, we argue, to understanding sudden phase transitions that can transform the cultural landscape. We hope that, by being provocative, we can interest more evolutionists, behavioral biologists, and ecologists in tackling the daunting but crucial problems of cultural evolution. Few issues in science would seem to be more pressing if civilization is to survive. Acknowledgments We have received helpful critical comments from Kenneth Arrow, John Bonner, Samuel Bowles, Kai Chan, Gretchen Daily, Partha Dasgupta, Adrian deFroment, Anne Ehrlich, Marcus Feldman, Michelle Girvan, Ann Kinzig, Deborah Prentice, and Will Provine. Amy Bordvik provided invaluable assistance in preparing the manuscript for publication.
References 1. Ridley M (1996) Evolution. Cambridge (Massachusetts): Blackwell Science. 719 p. 2. Futuyma DJ (1998) Evolutionary biology. Sunderland (Massachusetts): Sinauer Associates. 763 p. 3. Durrett R, Levin SA (2005) Can stable social groups be maintained by homophilous imitation alone? J Econ Behav Organ. In press. 4. Klein RG (1999) The human career: Human biological and cultural origins. Chicago (Illinois): University of Chicago Press. 840 p. 5. Cavalli-Sforza LL, Feldman MW (2003) The application of molecular genetic approaches to the study of human evolution. Nat Genet 33 (Suppl): 266-275. 6. Hewlett BS, Silvestri AD, Guglielmino CR (2002) Semes and genes in Africa. Curr Anthropol 43: 313-321. 7. Danchin E, Giraldeau L, Valone T, Wagner R (2004) Public information: From nosy neighbors to cultural evolution. Science 305: 487-491. 8. Ehrlich PR, Feldman MW (2003) Genes and cultures: What creates our behavioral phenome? Curr Anthropol 44: 87-107. 9. Diamond J (2005) Collapse: How societies choose to fail or succeed. New York: Viking. 592 p. 10. Ehrlich PR, Holm RW (1963) The process of evolution. New York: McGraw-Hill. 347 p. 11. Boyd R, Richerson PJ (1985) Culture and the evolutionary process. Chicago (Illinois): University of Chicago Press. 331 p. 12. Wilson DS (2002) Darwin's cathedral: Evolution, religion, and the nature of society. Chicago (Illinois): University of Chicago Press. 268 p. 13. Cavalli-Sforza LL, Feldman MW (1973) Cultural versus biological inheritance: Phenotypic transmission from parent to children (A theory of the effect of parental phenotypes on children's phenotype). Am J Hum Genet 25: 618-637. 14. Cavalli-Sforza LL, Feldman MW (1981) Cultural transmission and evolution: A quantitative approach. Princeton (New Jersey): Princeton University Press. 388 p. 15. Ornstein R, Ehrlich P (1989) New world new mind: Moving toward conscious evolution. New York: Doubleday. 302 p. 16. Levin SA (1999) Fragile dominion: Complexity and the commons.
Reading (Massachusetts): Perseus Books. 250 p. 17. Ehrlich PR (2000) Human natures: Genes, cultures, and the human prospect. Washington (D. C.): Island Press. 531 p. 18. Sumner WG (1911) Folkways: A study of the social importance of usages, manners, customs, mores, and morals. Boston (Massachusetts): Ginn & Co. 692 p. 19. Bowles S, Gintis H (2004) The evolution of strong reciprocity: Cooperation in heterogeneous populations. Theor Popul Biol 65: 17-28. 20. Kendal J, Feldman MW, Aoki K (2005) Cultural coevolution of norm adoption and enforcement when punishers are rewarded or non-punishers are punished. Morrison Work Pap 102: 1-22. 21. Kuper A (1999) Culture: The anthropologists' account. Cambridge (Massachusetts): Harvard University Press. 299 p. 22. Dawkins R (1976) The selfish gene. New York: Oxford University Press. 224 p. 23. Blackmore S (1999) The meme machine. Oxford (United Kingdom): Oxford University Press. 288 p. 24. Tversky A, Kahneman D (1981) The framing of decisions and the psychology of choice. Science 211: 453-458. 25. Tversky A, Kahneman D (1986) Rational choice and the framing of decisions. J Bus 59: S251-S278. 26. Shweder RA (1982) Beyond self-constructed knowledge: The study of culture and morality. Merrill Palmer Q 28: 41-69. 27. Shweder RA (1991) Thinking through cultures: Expeditions in cultural psychology. Cambridge (Massachusetts): Harvard University Press. 404 p. 28. Henrich J, Boyd R (2001) Why people punish defectors: Weak conformist transmission can stabilize costly enforcement of norms in cooperative dilemmas. J Theor Biol 208: 79-89. 29. Collins R (1998) The sociology of philosophies: A global theory of intellectual change. Cambridge (Massachusetts): Belknap Press. 1098 p. 30. Stark R, Finke R (2000) Acts of faith: Explaining the human side of religion. Berkeley (California): University of California Press. 343 p. 31. Rogers EM (1995) Diffusion of innovations. New York: Free Press. 519 p. 32. 
Becker HS (1963) Outsiders: Studies in the sociology of deviance. London: Free Press of Glencoe. 179 p. 33. Stark R (1996) The rise of Christianity: A sociologist reconsiders history. Princeton (New Jersey): Princeton University Press. 288 p. 34. Adler PA, Adler P, editors (2002) Constructions of deviance: Social power, context, and interaction. Belmont (California): Wadsworth Thomson Learning. 508 p. 35. Greeley AM (1994) A religious revival in Russia. J Sci Study Relig 33: 253-272. 36. Pigliucci M (2002) Denying evolution: Creationism, scientism, and the nature of science. Sunderland: Sinauer Associates. 275 p. 37. Jacobs A (2004) Georgia takes on `evolution' as `monkeys to man' idea. New York Times Sect A: 13. 38. Laland KN, Odling-Smee FJ, Feldman MW, (2000) Group selection: A niche construction perspective. Evolutionary origins of morality: Cross-disciplinary perspectives. In: Katz LD, editor. Bowling Green (Ohio): Imprint Academic. pp 221-225. 39. Palmer CT, Frederickson BE, Tilley CF (1997) Categories and gatherings: Group selection and the mythology of cultural anthropology. Ethol Sociobiol 18: 291-308. 40. Durrett R, Levin SA (1994) Stochastic spatial models: A user's guide to ecological applications. Philos Trans R Soc Lond B Biol Sci 343: 329-350. 41. Nowak MA, Bonhoeffer S, May RM (1994) Spatial games and the maintenance of cooperation. Proc Natl Acad Sci U S A 91: 4877-4881. 42. Wright S (1980) Genic and organismic selection. Evolution 34: 825-843. 43. Shaftesbury AAC (1978 [1711]) Characteristics of men, manners, opinions, times. Hildesheim (Germany): Georg Olms Verlag. 321 p. 44. Ehrlich PR, Hanski I (2004) On the wings of checkerspots: A model system for population biology. Oxford: Oxford University Press. 480 p. 45. Clark A (1965) The donkeys. New York: Award Books. 192 p. 46. Stevenson D (2004) Cataclysm: The first world war as political tragedy. New York: Basic Books. 564 p. 47. 
Harris S (2004) The end of faith: Religion, terror, and the future of reason. New York: W.W. Norton & Co. 256 p. 48. Ehrlich PR (2005) Twenty-first century systematics and the human predicament. Proceedings Calif Acad Sci 56. Suppl 1 122-140. 49. Kroeber AL, Parsons T (1958) The concepts of culture and of social system. Am Sociol Rev 23: 582-583. 50. Keesing R (1974) Theories of culture. Annu Rev Anthropol 3: 73-97. 51. Moore JT (1974) The culture concept as ideology. American Ethnologist 1: 537-549. 52. Drummond L (1980) The cultural continuum: A theory of intersystems. Man 15: 352-374. 53. Kahn J (1989) Culture, demise or resurrection? Crit Anthropol 9: 5-25. 54. Durham WH (1991) Coevolution: Genes, culture, and human diversity. Stanford (California): Stanford University Press. 629 p. 55. Brightman R (1995) Forget culture: Replacement, transcendence, relexification. Cult Anthropol 10: 509-546. 56. Borofsky R, Barth F, Shweder R, Rodseth L, Stolzenberg N (2001) When: A conversation about culture. Am Anthropol 103: 432-446. 57. Mesoudi A, Whiten A, Laland KN (2004) Is human cultural evolution darwinian? Evidence reviewed from the perspective of the origin of species. Evolution 58: 1-11. 58. Nakamaru M, Levin SA (2004) Spread of two linked social norms on complex interaction networks. J Theor Biol 230: 57-64. 59. Guglielmino CR, Viganotti C, Hewlett B, Cavalli-Sforza L (1995) Cultural variation in Africa: Role of mechanisms of transmission and adaptation. Proc Natl Acad Sci U S A 92: 7585-7589. From checker at panix.com Fri Jul 15 19:35:41 2005 From: checker at panix.com (Premise Checker) Date: Fri, 15 Jul 2005 15:35:41 -0400 (EDT) Subject: [Paleopsych] Michael Behar: How Earth-Scale Engineering Can Save the Planet Message-ID: Michael Behar: How Earth-Scale Engineering Can Save the Planet Popular Science, 5.8 http://www.popsci.com/popsci/aviation/article/0,20967,1075786,00.html Maybe we can have our fossil fuels and burn 'em too. 
These scientists have come up with a plan to end global warming. One idea: A 600,000-square-mile space mirror

David Keith never expected to get a summons from the White House. But in September 2001, officials with the President's Climate Change Technology Program invited him and more than two dozen other scientists to participate in a roundtable discussion called "Response Options to Rapid or Severe Climate Change." While administration officials were insisting in public that there was no firm proof that the planet was warming, they were quietly exploring potential ways to turn down the heat. Most of the world's industrialized nations had already vowed to combat global warming by reining in their emissions of carbon dioxide, the chief "greenhouse gas" blamed for trapping heat in Earth's atmosphere. But in March 2001 President George W. Bush had withdrawn U.S. support for the Kyoto Protocol, the international treaty mandating limits on CO2 emissions, and asked his administration to begin studying other options. Keith, a physicist and economist in the chemical and petroleum engineering department at the University of Calgary, had for more than a decade been investigating strategies to curtail global warming. He and the other scientists at the meeting -- including physicists from Lawrence Livermore National Laboratory who had spent a chunk of their careers designing nuclear weapons -- had come up with some ideas for "geoengineering" Earth's climate. What they proposed was tinkering on a global scale. "We already are inadvertently changing the climate, so why not advertently try to counterbalance it?" asks retired Lawrence Livermore physicist Michael MacCracken, a former senior scientist at the U.S. Global Change Research Program who helped organize the meeting. "If they had broadcast that meeting live to people in Europe, there would have been riots," Keith says. "Here were the bomb guys from Livermore talking about stuff that strikes most greens as being completely wrong and off-the-wall."

But today, a growing number of physicists, oceanographers and climatologists around the world are seriously considering technologies for the deliberate manipulation of Earth's climate. Some advocate planetary air-conditioning devices such as orbiting space mirrors that deflect sunlight away from Earth, or ships that intensify cloud cover to block the sun's rays. Others are suggesting that we capture carbon dioxide -- from the air, from cars and power plants -- and stash it underground or react it with chemicals that turn it to stone.

Carbon dioxide wasn't always public enemy number one. For the past 400,000 years, the concentration of CO2 in the atmosphere has fluctuated between about 180 and 280 ppm (parts per million, the number of CO2 molecules per million molecules of air). But in the late 1800s, when humans set about burning fossil fuels in earnest, atmospheric CO2 began to increase with alarming speed -- from about 280 ppm to the current level of almost 380 ppm, in a scant 100 years. Experts predict that CO2 could climb as high as 500 ppm by 2050 and possibly twice that by the end of the century. As CO2 levels continue to rise, the planet will get hotter. "The question now," says Ken Caldeira, an atmospheric scientist at Lawrence Livermore and one of the world's leading authorities on climate change, "is what can we actually do about it?" Here are some of the geoengineering schemes under consideration.

1. Store CO2 Underground
Feasibility: 10 | Cost: $$ | Risk: 4

In the southeastern corner of Saskatchewan, just outside the town of Weyburn -- the "Opportunity City" -- a steel pipeline descends 4,000 feet below the prairie at the edge of a 70-square-mile oil field. Into this subterranean cavern, petroleum engineers are pumping 5,000 tons of pressurized, liquefied carbon dioxide every day.
The aim is twofold: Use high-pressure CO2 to drive oil from the porous rock in the reservoir to the surface, and trap the carbon dioxide underground. Welcome to the world's largest carbon-sequestering operation. Dubbed the Weyburn Project, it began in July 2000 as a partnership between EnCana, a Canadian oil and gas company, and Canada's Petroleum Technology Research Centre. With $13 million in funding from more than a dozen sponsors, including the U.S. Department of Energy, engineers have already socked away six million tons of carbon dioxide, roughly the amount produced by burning half a billion gallons of gasoline.

The Timeline
Unlike other geoengineering schemes, this one is already happening, with more than half a dozen major projects under way. The problem, says Howard Herzog, a principal research engineer at MIT's Laboratory for Energy and the Environment, is that concentrated CO2 is in short supply. There's too much of the gas floating around in the air, but actually capturing, compressing, and transporting it costs money. In the U.S. and most other nations, there are no laws requiring fossil-fuel-burning power plants -- the primary source of CO2 emissions -- to capture a single molecule of the gas.

The Promise
By 2033, the Weyburn Project will store 25 million tons of carbon dioxide. "That's like taking 6.8 million cars off the road for one year," says project manager Mike Monea, "and this is just a pilot test in a small oil reservoir." Saline aquifers, giant pools of saltwater that have been trapped underground for millions of years, could hold even more CO2. Humans dump about 28 gigatons of CO2 into the atmosphere every year. Geologists estimate that underground reservoirs and saline aquifers could store as much as 200,000 gigatons.

The Perils
Before CO2 is injected into the ground, it's compressed into what's called a supercritical state -- it's extremely dense and viscous, and behaves more like a liquid than a gas.
In this form, CO2 should remain trapped underground for thousands of years, if not indefinitely. The danger is if engineers accidentally "depressurize" an aquifer while probing for oil or natural gas. There's also a risk that carbon dioxide could escape slowly through natural fissures in subterranean rock and pool up in basements or cellars. "If you walked down into a basement [full of CO2]," Keith says, "you wouldn't smell it or see it, but it would kill you."

2. Filter CO2 from the Air
Feasibility: 4 | Cost: $$$ | Risk: 4

Klaus Lackner is accustomed to skeptics. They've doubted him since he first presented his idea for extracting carbon dioxide from ambient air in March 1999, at an international symposium on coal and fuel technology. "The reaction from everyone there was utter disbelief," recalls Lackner, a physicist with the Earth Engineering Center at Columbia University. He called for the construction of giant filters that would act like flypaper, trapping CO2 molecules as they drifted past in the wind. Sodium hydroxide or calcium hydroxide -- chemicals that bind with carbon dioxide -- would be pumped through the porous filters much the way antifreeze is circulated through a car's radiator. A secondary process would strip the CO2 from the binding chemical. The chemical would recirculate through the filter, while the CO2 would be set aside for disposal.

The Timeline
Lackner is collaborating with engineer Allen Wright, who founded Global Research Technologies in Tucson, Arizona. Wright is developing a wind-scrubber prototype but remains tight-lipped about the project. He estimates that a completed system is at least two years away.

The Promise
Wind scrubbers can be placed wherever it's convenient to capture carbon dioxide, so there's no need to transport it.
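The hydroxide capture chemistry described above lends itself to a quick back-of-envelope calculation. A minimal sketch in Python, assuming the net capture reaction is 2 NaOH + CO2 -> Na2CO3 + H2O; the article names the chemicals but not the reaction, so the stoichiometry here is standard textbook chemistry, not a detail of Lackner's design:

```python
# Rough stoichiometry for sodium-hydroxide CO2 capture.
# Assumed net reaction: 2 NaOH + CO2 -> Na2CO3 + H2O.
# The article says the hydroxide is regenerated and recirculated,
# so this is the mass that must cycle through the filter,
# not the mass consumed.

M_NAOH = 40.0  # g/mol, sodium hydroxide
M_CO2 = 44.0   # g/mol, carbon dioxide

def naoh_tons_per_ton_co2():
    """Tons of NaOH needed to bind one ton of CO2 (2:1 molar ratio)."""
    return 2 * M_NAOH / M_CO2

# At the article's figure of roughly 25 tons of CO2 per American per
# year, one person's emissions would mean cycling about 45 tons of NaOH:
naoh_per_person_per_year = 25 * naoh_tons_per_ton_co2()
```

The 2:1 molar ratio is why regenerating the hydroxide matters so much: without recycling, the sorbent would outweigh the CO2 it captures by nearly a factor of two.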
Lackner calculates that a wind scrubber designed to retain 25 tons of CO2 per year -- the average amount each American adds to the atmosphere annually -- would require a device about the size of a large plasma-screen television. A single industrial-size wind scrubber about 200 feet high and 165 feet wide would snag about 90,000 tons of CO2 a year.

The Perils
Some experts are dubious about the ease of separating carbon dioxide from the binding chemical, a process that in itself would require energy from fossil fuels. "CO2 is so dilute in the air that to try to scrub from it, you have to pay too much for energy use," Herzog says. And to capture all the carbon dioxide being added to the atmosphere by humans, you'd need to blanket an area at least the size of Arizona with scrubber towers.

3. Fertilize the Ocean
Feasibility: 10 | Cost: $ | Risk: 9

On January 5, 2002, Revelle, a research vessel operated by the Scripps Institution of Oceanography, left New Zealand for the Southern Ocean -- a belt of frigid, stormy seas that separates Antarctica from the rest of the world. There the scientists dumped almost 6,000 pounds of iron powder overboard and unleashed an armada of instruments to gauge the results. The intent was to test a hypothesis put forth by oceanographer John Martin. At a lecture more than a decade ago, Martin declared: "Give me a half-tanker of iron, and I will give you an ice age." He was alluding to the fact that the Southern Ocean is packed with minerals and nutrients but strangely devoid of sea life. Martin had concluded that the ocean was anemic -- containing very little iron, an essential nutrient for plankton growth. Adding iron, Martin believed, would cool the planet by triggering blooms of CO2-consuming plankton. Oceanographer Kenneth Coale, who directs the Moss Landing Marine Laboratories near Monterey, California, was a chief scientist on the Southern Ocean cruise.
He says the project was a success, proving that relatively small quantities of iron could spawn colossal blooms of plankton.

The Timeline
Scientists are wary, saying that too little is known about the deep-ocean environment to endorse further large-scale experiments. In October, Coale and other scientists will gather in New Zealand for a weeklong meeting sponsored by the National Science Foundation, New Zealand's National Institute for Water and Atmosphere, and the International Geosphere-Biosphere Programme to decide how to proceed.

The Promise
Iron fertilization is by far the cheapest and easiest way to mitigate carbon dioxide. Coale estimates that just one pound of iron could conceivably hatch enough plankton to sequester 100,000 pounds of CO2. "Even if the process is only 1 percent efficient, you just sequestered half a ton of carbon for a dime."

The Perils
"What is still a mystery," Coale says, "is the ripple effect on the rest of the ocean and the food chain." One fear is that huge plankton blooms, in addition to gorging on CO2, will devour other nutrients. Deep currents carry nutrient-rich water from the Southern Ocean northward to regions where fish rely on the nutrients to survive. Says Coale, "A fertilization event to take care of atmospheric CO2 could have the unintended consequence of turning the oceans sterile. Oops."

4. Turn CO2 to Stone
Feasibility: 7 | Cost: $$ | Risk: 3

The Grand Canyon is one of the largest carbon dioxide repositories on Earth. Hundreds of millions of years ago, a vast sea covered the land there. The water, rich in carbon dioxide, slowly reacted with other chemicals to create calcium carbonate, or limestone -- the pinkish bands striping the canyon walls today. Nature's method for turning CO2 to stone is achingly slow, but researchers at the Goldwater Materials Science Laboratory at Arizona State University are working on a way to speed up the process.
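Returning briefly to the iron-fertilization figures above: Coale's arithmetic is easy to verify with simple unit conversion. A sketch in Python (the dime-per-pound price of iron is implied by his quote rather than stated, so treat it as an assumption):

```python
# Sanity check of Coale's iron-fertilization estimate:
# 1 lb of iron -> up to 100,000 lb of CO2 sequestered.
# At his pessimistic 1 percent efficiency that is 1,000 lb,
# i.e. half a short ton (2,000 lb per ton), matching the quote
# "half a ton ... for a dime."

CO2_LB_PER_LB_IRON = 100_000  # best-case estimate from the article
EFFICIENCY = 0.01             # the quoted worst case
LB_PER_SHORT_TON = 2_000

sequestered_tons = CO2_LB_PER_LB_IRON * EFFICIENCY / LB_PER_SHORT_TON

# Implied cost per ton, assuming a dime buys one pound of iron
# (an assumption; the article gives only the quote):
cost_per_ton = 0.10 / sequestered_tons
```

Even under the pessimistic case, the implied 20 cents per ton is orders of magnitude cheaper than the $70 per ton cited for mineral carbonation below, which is why fertilization rates as "cheapest and easiest" despite its ecological risks.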
Michael McKelvy and Andrew Chizmeshya use serpentine or olivine, widely available and inexpensive minerals, as feedstock to fuel a chemical reaction that transforms CO2 into magnesium carbonate, a cousin of limestone. To initiate the reaction -- known as "mineral carbonation" -- the CO2 is compressed, heated, and mixed with feedstock and a catalyst, such as sodium bicarbonate (baking soda).

The Timeline
Scaling up the process to handle millions of tons of CO2 would require huge quantities of serpentine or olivine. A single mineral-carbonation plant would carve out a mountain, but, McKelvy says, "You could carbonate [the CO2] and put it right back where the feedstock came from."

The Promise
Mineral carbonation is simply an accelerated version of a benign natural process. The limestone in the Grand Canyon is 500 feet thick, McKelvy says, "and it has been sitting there not bothering anybody for millennia."

The Perils
It costs roughly $70 to eliminate one ton of CO2, a price that McKelvy says is too high. Also, the feedstock and CO2 must be heated to high temperatures. "You wind up having to burn fossil fuels in order to provide the energy to activate the mineral to put away the CO2," he says.

5. Enhance Clouds to Reflect Sunlight
Feasibility: 6 | Cost: $$ | Risk: 7

Some proposed solutions to global warming don't involve capturing carbon dioxide. Instead they focus on turning down the heat by deflecting or filtering incoming sunlight. On any given day, marine stratocumulus clouds blanket about one third of the world's oceans, mostly around the tropics. Clouds form when water vapor clings to dust or other particles, creating droplets. Seeding clouds with tiny salt particles would enable more droplets to form -- making the clouds whiter and therefore more reflective.
According to physicist John Latham, a senior research associate at the National Center for Atmospheric Research in Boulder, Colorado, boosting reflectivity, or albedo, in just 3 percent of marine stratocumulus clouds would reflect enough sunlight to curb global warming. "It would be like a mirror for incoming solar radiation," Latham explains. Latham is collaborating with Stephen Salter, an emeritus professor of engineering design at the University of Edinburgh, who is making sketches for GPS-steered, wind-powered boats that would cruise the tropical latitudes, churning up salt spray. "I am planning a flotilla of unmanned yachts sailing backward and forward across the wind," Salter says. "They would drag propellers through the water to generate electricity, which we'd use to make the spray." Salter wants to outfit each boat with four 60-foot-tall Flettner rotors, which look like smokestacks but act like sails. An electric motor starts each rotor spinning, which, along with the wind, creates a pressure differential (less pressure in front of the rotor, more in back), generating forward thrust. From the top of the rotor, an impeller would blast a fine saltwater mist into the air. Until the concept is tested, Salter isn't sure exactly how many ships would be needed to mitigate global warming. "Maybe between 5,000 and 30,000," he says. That may sound like a lot, but Salter notes that for World War II, the U.S. built nearly 100,000 aircraft in 1944 alone.

The Timeline
Latham initially raised the notion in a 1990 paper. "The article went down like a lead balloon," he says. But early last year in England, at a geoengineering conference hosted by MIT and the Tyndall Centre for Climate Change Research, he presented the concept again. "The consensus was that a number of ideas originally thought to be outlandish were deemed sufficiently plausible to be supported further. Our work fell into that category." Latham needs a few million dollars to test his idea. "On the scale of the damage that will be caused by global warming, that is utterly peanuts."

The Promise
What's nice about this idea is that it can easily be fine-tuned. "If we tried it and there was some deleterious effect, we could switch it off, and within four or five days all evidence would have disappeared," Latham says.

The Perils
One worry is that although the tiny salt particles released by evaporating sea mist are perfect for marine stratocumulus-cloud formation, they are too small to create rain clouds. "You might make it harder for rain to form," Salter says. "Therefore, you would not want to do this upwind of a place where there is a bad drought."

6. Deflect Sunlight With a Mirror
Feasibility: 1 | Cost: $$$$ | Risk: 5

One of the most ambitious schemes is a giant space "mirror" positioned between the Earth and sun to intercept sunlight. To build the mirror, physicist Lowell Wood, a senior staff scientist at Lawrence Livermore, proposes using a mesh of aluminum threads that are only a millionth of an inch in diameter and a thousandth of an inch apart. "It would be like a window screen made of exceedingly fine metal wire," he explains. The screen wouldn't actually block the light but would simply filter it so that some of the incoming infrared radiation wouldn't reach Earth's atmosphere.

The Timeline
Wood, who has been researching the mirror idea for more than a decade, says it should be considered only as a safety net if all other means of reversing global warming "fail or fall grossly short over the next few decades."

The Promise
Once in place, the mirror would cost almost nothing to operate. From Earth, it would look like a tiny black spot on the sun. "People really wouldn't see it," says Michael MacCracken. And plant photosynthesis isn't expected to be affected by the slight reduction in sunlight.
The Perils
Wood calculates that deflecting 1 percent of incoming solar radiation would stabilize the climate, but doing so would require a mirror spanning roughly 600,000 square miles -- or several smaller ones. Putting something that size in orbit would be a massive challenge, not to mention exorbitantly expensive.

From checker at panix.com Sat Jul 16 22:54:01 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 18:54:01 -0400 (EDT) Subject: [Paleopsych] NYT: Harry Potter Works His Magic Again in a Far Darker Tale Message-ID: Harry Potter Works His Magic Again in a Far Darker Tale http://www.nytimes.com/2005/07/16/books/16choc.html

[Several articles appended, with the pope weighing in in the last one. As with the previous two, we're getting our copy directly from England and not buying the dumbed-down translation into American, where crumpets are called English muffins, not the same thing at all! It took about a week to get the original. Why so long, I don't know. The discount was greater there, so even with postage for air delivery, it was actually cheaper. We got an e-mail at 10:00 am yesterday, presumably GMT, saying the book had been shipped, 14 hours before the official release. I have a picture of the author on the door of my office, on the grounds that she knows more about how to get kids to read than all 4,700 bureaucrats at the U.S. Department of Education. Everyone laughs when I make this claim. So far not a single soul has contested its truth. I suggested that we invite the author to come visit and share her thoughts. Not enough money in the budget, I was told, though it would have cost 0.01% of the annual budget at the most.]

HARRY POTTER AND THE HALF-BLOOD PRINCE By J. K. Rowling. Illustrations by Mary GrandPré. 632 pages. Arthur A. Levine/Scholastic. $29.99.
By MICHIKO KAKUTANI

In an earlier Harry Potter novel, Sibyll Trelawney, divination teacher, looks at Harry and declares that her inner eye sees past his "brave face to the troubled soul within." "I regret to say that your worries are not baseless," she adds. "I see difficult times ahead for you, alas ... most difficult ... I fear the thing you dread will indeed come to pass ... and perhaps sooner than you think." In "Harry Potter and the Half-Blood Prince," that frightening prophecy does in fact come true - in a thoroughly harrowing denouement that sees the death of yet another important person in Harry's life, and that renders this, the sixth volume of the series, the darkest and most unsettling installment yet. It is a novel that pulls together dozens of plot strands from previous volumes, underscoring how cleverly and carefully J. K. Rowling has assembled this giant jigsaw puzzle of an epic. It is also a novel that depicts Harry Potter, now 16, as more alone than ever - all too well aware of loss and death, and increasingly isolated by his growing reputation as "the Chosen One," picked from among all others to do battle with the Dark Lord, Voldemort. As the novel opens, the wizarding world is at war: Lord Voldemort and his Death Eaters have grown so powerful that their evil deeds have spilled over into the Muggle world of nonmagic folks. The Muggles' prime minister has been alerted by the Ministry of Magic about the rise of Voldemort. And the terrible things that Ms. Rowling describes as being abroad in the green and pleasant land of England read like a grim echo of events in our own post-9/11, post-7/7 world and an uncanny reminder that the Hogwarts Express, which Harry and his friends all take to school, leaves from King's Cross station - the very station where the suspected London bombers gathered minutes before the explosions that rocked the city nine days ago.
Harry, who as an infant miraculously survived a Voldemort attack that killed his mother and father, is regarded as "a symbol of hope" by many in the wizarding world, and as he learns more about the Dark Lord's obsession with his family, he realizes that he has a destiny he cannot escape. Like Luke Skywalker, he is eager to play the role of hero. But like Spider-Man, he is also aware of the burden that that role imposes: although he has developed romantic yearnings for a certain girl, he is wary of involvement, given his recognition of the dangers he will have to face. "It's been like ... like something out of someone else's life, these last few weeks with you," he tells her. "But I can't ... we can't ... I've got things to do alone now." Indeed, the perilous task Professor Dumbledore sets Harry in this volume will leave him with less and less time for Quidditch and hanging out with his pals Ron and Hermione: he is to help his beloved teacher find four missing Horcruxes - super-secret, magical objects in which Voldemort has secreted parts of his soul as a means of ensuring his immortality. Only when all of these items have been found and destroyed, Harry is told, can the Dark Lord finally be vanquished. There are a host of other unsettling developments in this novel, too: the Dementors, those fearsome creatures in charge of guarding Azkaban Prison, have joined forces with Voldemort; Draco Malfoy, Harry's sneering classmate who boasts of moving on to "bigger and better things," appears to vanish regularly from the school grounds; the sinister Severus Snape has been named the new teacher of defense against the dark arts; two Hogwarts students are nearly killed in mysterious attacks; and Dumbledore suddenly turns up with a badly injured hand, which he declines to explain. 
One of the few bright spots in Harry's school life appears to be an old textbook annotated by its enigmatic former owner, who goes by the name the Half-Blood Prince - a book that initially supplies Harry with some helpful tips for making potions. The early and middle sections of this novel meld the ordinary and the fantastic in the playful fashion Ms. Rowling has patented in her previous books, capturing adolescent angst about boy-girl and student-teacher relations with perfect pitch. Ron and Hermione, as well as Harry, all become involved in romantic flirtations with other students, even as they begin to realize that their O.W.L. (Ordinary Wizarding Level) grades may well determine the course of their post-Hogwarts future. As the story proceeds, however, it grows progressively more somber, eventually becoming positively Miltonian in its darkness. In fact, two of the novel's final scenes - like the violent showdown between Obi-Wan Kenobi and Anakin Skywalker in the last "Star Wars" movie, "Revenge of the Sith" - may well be too alarming for the youngest readers. Harry still has his wry sense of humor and a plucky boyish heart, but as in the last volume ("Harry Potter and the Order of the Phoenix"), he is more Henry V than Prince Hal, more King Arthur than the young Wart. He has emerged, at school and on the Quidditch field, as an unquestioned leader: someone who must learn to make unpopular decisions and control his impetuous temper, someone who must keep certain secrets from his schoolmates and teachers. He has become more aware than ever of what he and Voldemort have in common - from orphaned childhoods to an ability to talk Parseltongue (i.e., snake speech) to the possession of matching wands - and in one chilling scene, he is forced to choose between duty to his mission and his most heartfelt emotions.
In discovering the true identity of the Half-Blood Prince, Harry will learn to re-evaluate the value of first impressions and the possibility that his elders' convictions can blind them to parlous truths. And in embracing his own identity, he will discover his place in history. As in earlier volumes, Ms. Rowling moves Harry's story forward by chronicling his adventures at Hogwarts, while simultaneously moving backward in time through the use of flashbacks (via Dumbledore's remarkable Pensieve, a receptacle for people's memories). As a result, this is a coming-of-age story that chronicles the hero's evolution not only by showing his maturation through a series of grueling tests, but also by detailing the growing emotional wisdom he gains from understanding more and more about the past. In addition to being a bildungsroman, of course, the Harry Potter books are also detective stories, quest narratives, moral fables, boarding school tales and action-adventure thrill rides, and Ms. Rowling uses her tireless gift for invention to thread these genres together, while at the same time taking myriad references and tropes (borrowed from such disparate sources as Shakespeare, Dickens, fairy tales, Greek myths and more recent works like "Star Wars") and making them her own. Perhaps because of its position as the penultimate installment of a seven-book series, "The Half-Blood Prince" suffers, at moments, from an excess of exposition. Some of Dumbledore's speeches to Harry have a forced, summing-up quality, and the reader can occasionally feel Ms. Rowling methodically setting the stage for developments to come or fleshing out scenarios put in play by earlier volumes (most notably, "Harry Potter and the Chamber of Secrets," with its revelations about the young Voldemort, a k a Tom Riddle). Such passages, however, are easily forgotten, as the plot hurtles along, gaining a terrible momentum in this volume's closing pages.
At the same time, the suspense generated by these books does not stem solely from the tension of wondering who will die next or how one or another mystery will be solved. It stems, as well, from Ms. Rowling's dexterity in creating a character-driven tale, a story in which a person's choices determine the map of his or her life - a story that creates a hunger to know more about these people who have become so palpably real. We want to know more about Harry's parents - how they met and married and died - because that may tell us more about Harry's own yearnings and decisions. We want to know more about Dumbledore's desire to believe the best of everyone because that may shed light on whom he chooses to trust. We want to know more about the circumstances of Tom Riddle's birth because that may shed light on his decision to reinvent himself as Lord Voldemort. Indeed, the achievement of the Potter books is the same as that of the great classics of children's literature, from the Oz novels to "The Lord of the Rings": the creation of a richly imagined and utterly singular world, as detailed, as improbable and as mortal as our own. -------------------- New 'Harry Potter' Packs a Punch http://www.nytimes.com/aponline/arts/AP-Harry-Potter-Review.html By THE ASSOCIATED PRESS Filed at 12:20 a.m. ET NEW YORK (AP) -- A word of caution to all those hard-core fans about to dive into the latest adventures of Harry Potter: There will be tears. Yours. It's odd to think of the next-to-last Potter book as being a turning point but, in so many ways, that's the truth about ''Harry Potter and the Half-Blood Prince.'' J.K. Rowling's hero is no longer a boy wizard; he's a young man, determined to seek out and face a young man's challenges. Veteran Potter readers shouldn't worry about outgrowing the series, but younger fans may find that it has grown up too much. 
All ages, however, should be assured: Rowling's latest has lost none of the charm, intelligence and hilarity that have catapulted her series into publishing history. But this book also has a poignancy, complexity and sadness we probably couldn't have imagined when we started reading the first one. There's an emotional punch you won't believe. When Book 5, ''Harry Potter and the Order of the Phoenix,'' ended, we learned what made the evil wizard Lord Voldemort kill Harry's parents and try to kill him as an infant -- a prophecy that said Harry could vanquish him, and that one of them would have to kill the other in order to survive. The wizarding world Rowling returns us to in Book 6 is a scarier place, even though only a few weeks have passed. Thanks to Harry and friends, the entire magical community knows that Voldemort is back. And how. He and his Death Eater followers have unleashed so much violence and murder that even the head of the non-magical world has to be told about it. Sooner than in her previous books, the action shifts Harry away from his awful relatives, the Dursleys, and right back to friends Ron and Hermione and all the others at Hogwarts School of Witchcraft and Wizardry. As usual, there are new faces -- the new teacher of Defense against the Dark Arts gives Harry some serious concerns. As sixth-year students, Harry and company have moved on from the magical basics into more complicated studies, preparing for their after-school careers. But it's not just the work that's gotten more complicated, it's everything.
Harry has his suspicions about who's trying to do what, and it all erupts in the end. Rowling shows off her mastery, leading us down a path with certain clues and still managing to blindside us about who turns on whom and who doesn't. And, yes, there is another MAJOR death. Seriously major. Break out the tissues. No matter how well you think you know these books, don't assume you really know who anyone is, or what they are and aren't capable of. This is a powerful, unforgettable setup for the finale. The hardest thing about ''Half-Blood Prince'' is where it leaves us -- in mourning for who has been lost, anxious to learn how Rowling will wrap up a saga that millions wish would go on and on. On the Net: [3]http://www.scholastic.com [4]http://www.jkrowling.com ------------------------ Harry Potter: A Dissent (2 Letters) http://www.nytimes.com/2005/07/16/opinion/l16potter.html To the Editor: With the latest Harry Potter release ("Harry Potter and the Half-Crazed Summer Camper," Arts pages, July 14), is there room for a dissenting voice? My 10-year-old son announced his intention never to read another Harry Potter book. "Because, Mom, haven't you noticed? It's the same old thing. Harry Potter falls in trouble, Harry Potter learns a spell. It gets so boring." Could I believe my ears? My son, a good reader, at last! And I recalled (silently) a favorite quote from Vladimir Nabokov: "Caress the details," he directed. "Read for the tingle, the shiver up the spine." When my son deposited his hardcover Potter collection in his school's donation box, he assured me: "I don't want to keep these. They're not the kind of books you read twice." Well, I asked, what kind of book do you read again? "One with details," he answered. Sorry, J. K. Rowling... 
Kate Roth New York, July 14, 2005 To the Editor: The editor who passed on Harry Potter ("The Editor's Tale," by John Kenney, Op-Ed, July 14) missed his true calling in life: he could obviously have a bright future as a country music lyricist. I presume his dog also left him? Bryan F. Erickson Eagan, Minn., July 14, 2005 -------------- New Harry Potter Book Getting Rave Reviews http://www.nytimes.com/aponline/arts/AP-Harry-Potter.html?pagewanted=print By THE ASSOCIATED PRESS Filed at 5:40 p.m. ET NEW YORK (AP) -- After all the hype and midnight madness, ''Harry Potter and the Half-Blood Prince'' is proving as much an event to read as to buy. Critics are calling it the most moving and mature of J.K. Rowling's fantasy series. The New York Times compared it favorably to ''The Lord of the Rings,'' and the Los Angeles Times to ''Charlotte's Web.'' The AP's Deepti Hajela called it a ''powerful, unforgettable setup for the finale,'' Book VII, when the great Potter ride is expected to end. ''It'll be very sad when she finishes writing the books,'' said Agnes Jang, 16, a resident of Sydney, Australia, who came out Friday night dressed head-to-toe as a student of the Hogwarts School of Witchcraft and Wizardry. ''But Harry has to move on.'' With a major character dying, tears may well break out around the globe over the next few days, but the age of Potter VI dawned at midnight Saturday with millions of smiles and a bit of a wink from Rowling. In Edinburgh, Scotland, the author emerged from behind a secret panel inside the city's medieval castle, settled into a leather easy chair and read an excerpt from the sixth chapter to a super-select group of 70 children from around the world. ''You get a lot of answers in this book,'' Rowling, a resident of Edinburgh, said as she arrived at the castle before thousands of adoring fans. ''I can't wait for everyone to read it.'' It was party time for Potter lovers. 
Elisabeth Grant-Gibson, co-owner of ''Windows a Bookshop'' in Monroe, La., said they did more than double a normal good day's business. The first copy went to 10-year-old Chloe Kaczvinsky, whose parents drove 30 miles to attend the store's Harry Potter Pajama Party. Her mother read the first chapter aloud during the ride home, and the second at home. Then her parents went to sleep. ''I asked Mom if I could read the book in bed. I stayed up to 5 and woke up at 8,'' Chloe said. In London, events were muted by the July 7 subway and bus bombings, which killed some 50 people. Book and magazine chain WH Smith scrapped a planned midnight launch at King's Cross Station, from whose fictional Platform 9 3/4 Harry catches the train to Hogwarts at the start of each term. The deadliest of the day's four attacks was on a subway near King's Cross. Still, hundreds of thousands of fans turned out to purchase Potter. In Dallas, about 200 of the faithful waited in the dark, mingling in an unlit parking lot, after storms knocked out power at a Barnes & Noble store. White horses posing as unicorns paraded down the main street of Wilmington, Ohio, where Books 'N' More quickly sold hundreds of Potters. Since Rowling first introduced Harry and his fellow students at Hogwarts to the world in 1997, the books have become a global phenomenon, selling 270 million copies in 62 languages and inspiring a series of movies. Rowling is now the richest woman in Britain, with a fortune estimated by Forbes magazine at $1 billion. With only brief interruptions, ''Half-Blood Prince'' has topped the charts of Amazon.com and Barnes & Noble.com since last December, when Rowling announced that she had completed it. 
Pre-orders worldwide were in the millions and even the audio book has been keeping pace with such blockbusters as Dan Brown's ''The Da Vinci Code'' and David McCullough's ''1776.'' The biggest glitch happened in Canada, where publisher Raincoast sought a court injunction after a Vancouver store accidentally sold 14 copies last week. A judge ordered customers not to discuss the book, copy it, sell it or read it before its release. The biggest gripes came in the U.S., not from critics, but from booksellers and environmentalists. Independent retailers were upset with Scholastic for selling the book on its web site at a 20 percent discount, more than many stores can afford. Environmentalists, meanwhile, were unhappy that Scholastic, unlike Raincoast, doesn't print the books on 100 percent recycled paper. ''We have some magic up our sleeves too,'' reads a message posted on the Web site of Greenpeace, ''a link to the Canadian publisher of `Harry Potter and the Half Blood Prince,' who can send you a tree-friendly version of this popular book.'' ------ AP reporters Jill Lawless, Cassandra Vinograd and Sarah Blaskovich in London; Catherine McAloon in Edinburgh, Scotland; and Meraiah Foley in Sydney, Australia contributed to this report. On the Net: [3]http://www.bloomsburymagazine.com [4]http://www.scholastic.com [5]http://www.harrypotter.com [6]http://www.jkrowling.com [7]http://www.greenpeace.org ------------------ Growing Up With Harry Potter http://www.nytimes.com/aponline/arts/AP-Growing-up-With-Harry.html By THE ASSOCIATED PRESS Filed at 2:28 a.m. ET RALEIGH, N.C. (AP) -- I was beginning to worry that my 12-year-old daughter might have outgrown Harry Potter -- or at least the excitement of Harry. When ''The Order of the Phoenix'' came out two years ago, Miana insisted we pre-order three months in advance. She begged my wife, Linda, to sew her a black Hogwarts robe, and we spent hours whittling her a wand -- with a whisker from our cat, Bear, as its ''magical core.'' We spent three hours at Borders playing games, doing face-painting and waiting in line to be one of the first with a book. But as Saturday's midnight release of ''The Half-Blood Prince'' approached, Miana, now a rising seventh grader, wasn't even sure she wanted to deal with all that.
Two years and 26 days was a long time to be away from Hogwarts School of Witchcraft and Wizardry, so we'd had to find other ways to feed our fantasy. Miana started with Eoin Colfer's ''Artemis Fowl'' series, with its fairies, goblins and pixies. Then we introduced her to J.R.R. Tolkien's ''Lord of the Rings'' trilogy -- an epic tale of good and bad wizards, of dragons and trolls and goblins and elves, and of a resurgent ''dark lord.'' It was the perfect parallel to Potter, and Miana ate it up. For months, she could talk of nothing but enchanted rings, magic swords and elvish runes, and she seemed to have little time for Harry and his friends. Her Hogwarts robe sat balled up in a corner of her closet, wrinkled and forgotten. Then, about three weeks ago, she asked if I could call Borders and see what they had planned for this year. When I told her the clerk had promised ''crazy, crazy fun,'' she asked if we could sign up. Miana began marking the passage of time in weeks or days ''TH'' -- 'til Harry. On Thursday, Miana called me at work to finalize our plans, which by now included her two best friends, Katie and Amanda. ''I hope I can sleep tonight,'' she said. ''Maybe I should take a Benadryl.'' She did. But, in our defense, she DID have a stuffy nose. Friday morning, I jumped in the car and drove to Borders to be there when they opened at 9 o'clock and get a low number for the line to pick up our book at midnight. When I returned home, Miana and Linda were waiting at the door. ''What number did we get?'' Miana asked. ''They said we'd have to come back later,'' I said. ''But I have a couple of surprises to tide you over.'' I made a big show of pulling out a box of Bertie Bott's Every Flavor beans (with new flavors rotten egg and bacon), a chocolate frog complete with wizard trading card and ... a purple ticket with the number ''0001.'' Miana danced around the kitchen chanting, ''We're No. 1! We're No. 
1!'' She and her friends exchanged phone calls throughout the day to confer on wardrobes and hairstyles. Katie, with her beautiful red hair and a black graduation robe from Goodwill, would be Ginny Weasley. Amanda, with a purple robe and her hair dyed raven-black, would be Harry's crush, Cho Chang. Linda had spent the evening before braiding Miana's hair to achieve that Hermione Granger frizziness. ''I'm hyper, I'm hyper,'' Miana said, bouncing up and down. ''Today can't go by fast enough!'' But when we got to the store around 9:30 p.m., the girls were already wondering whether they'd made a mistake dressing up. So few seemed to be in costume this year. As the minutes ticked by toward midnight, the three sat in a corner of the bookstore, eating oversized chocolate chip cookies and thumbing through a stack of J-14 and Tiger Beat magazines for the latest gossip on Orlando Bloom and Lindsay Lohan. Their minds seemed to be on anything but a boy with a lightning scar on his forehead. ''Am I having fun yet?'' Miana asked with that look of ennui that only a tween can muster. ''Because if I am, my face hasn't caught up with my brain.'' Maybe we just should've gone to Wal-Mart at midnight and dispensed with all the hoopla. Twelve suddenly seemed much too old for face-painting and hat-making. Then the store manager announced it was time to line up. ''Well,'' Miana said, giving me a shove. ''Get in line, buddy.'' Soon she had grabbed my cell phone and was counting down the minutes. At 11:55, Linda turned to me with a sad expression on her face. ''I'm already dreading the day when we finish the book,'' she said. ''Because then it will be over again.'' ''Are you going to cry?'' Miana asked with a look that said, ''You'd better not.'' When the manager announced at 11:58 that the books were being brought up to the registers, Miana was up on her tiptoes dancing a jig. ''It's 12 o'clock, it's 12 o'clock, it's 12 o'clock,'' she said, handing me back the phone. 
''Get your ticket.'' A couple of minutes later, Miana was leaving the store, the new book clutched tightly to her chest. As we drove home under a brilliant yellow half moon, we listened to the book on compact disc. The three girls who just a few hours earlier couldn't have cared less about witches and warlocks sat transfixed in the back seat, not uttering a word. ------ EDITOR'S NOTE: Allen G. Breed is the AP's Southeast regional writer, based in Raleigh. -------------- Rowling Promises Answers in Potter Book http://www.nytimes.com/aponline/arts/AP-Harry-Potter-Castle.html July 15, 2005 By THE ASSOCIATED PRESS Filed at 10:37 p.m. ET EDINBURGH, Scotland (AP) -- As midnight drew near, Harry Potter's creator arrived at the glowering medieval castle, prepared to crack open the next adventure the world has been waiting for. J.K. Rowling refused to give away plot details from ''Harry Potter and the Half-Blood Prince'' as she walked the red carpet Friday outside Edinburgh castle, where thousands of fans eagerly waited to watch her read on large video screens. ''You get a lot of answers in this book,'' Rowling said. ''I can't wait for everyone to read it.'' Seventy lucky young fans from around the world were spirited in carriages up cobbled streets into the 11th-century fortress, which was illuminated with neon lights and blazing torches. The carriages were drawn by black and white horses adorned with ostrich plumes, and driven by coachmen wearing capes and black top hats. Inside, lantern-bearing prefects led the 70 to the Queen Anne building, transformed for the evening into the entrance hall of Hogwarts School of Witchcraft and Wizardry. When the clock struck 12, Rowling emerged from behind a secret panel, settled into a leather easy chair and read an excerpt from the sixth chapter to the spellbound group. After the brief reading, the children erupted in screams and applause. The 70 won competitions to report on the book launch for their local newspapers. 
The fans outside were drawn from schools here in the Scottish capital, where the 39-year-old Rowling lives. Minnie Mass, an 18-year-old from Miami, Florida, and her best friend found their way to the launch party through hard work. They weren't competition winners, but were allowed inside the castle after lining up for several hours and stopping Rowling for a chat as she made her way down the red carpet. ''We are waitresses in Miami Beach. We worked for a year to get over here. We came last night and we are leaving tomorrow,'' said Mass, who is originally from Parana, Brazil. She said Harry Potter was the first book she read in English after learning the language in the United States: ''I told (Rowling) it was the first book I read in English and she was really touched.'' Chelsea Kennedy, 16, from Toronto, Canada, had only one criticism of the extravaganza. ''I wished it had been longer,'' she said. --------------------- Potter Fans, it's worth it to be harried. Newsday, July 15, 2005 http://www.newsday.com/news/columnists/ny-nyhen154344021jul15,0,5037324,print.column?coll=ny-news-columnists Hours ahead of schedule, I can now reveal the shocking contents of "Harry Potter and the Half-Blood Prince." Get ready now, you Muggles (that's Potter-talk for people without magical powers). If you aren't prepared to hear the latest adventures of the world's most beloved boy wizard with the lightning-bolt scar - well, you'd probably better stop reading right here. Wait! Only kidding! I don't really have the new Potter book, which is set for formal release right after midnight. I just thought it might be fun to watch those ghoulish enforcers at the Scholastic book division go totally bonkers when they heard that someone had blown their Embargo of Doom.
I'll bet those jittery publishing flacks are pressing the panic buttons right now, hyperventilating into their cell phones, interrogating innocent bookstore clerks, sending high-priced legal teams off in search of court injunctions - all to protect their precious 12:01 a.m. release. Ha! I got you this time - your own personal Lord Voldemort, casting evil spells on Harry's publishing-industry control freaks! Now, I agree the Potter books are great. I love how the J.K. Rowling series has created such excitement for reading among the young. Pretty much anything that gets the kids reading, I am fully in favor of. It could be recipes or mattress tags or cereal-box ingredient lists, for all I care. God knows we need something to yank the kiddies away from the video games and the cable TV. If it's the Hogwarts School of Witchcraft and Wizardry, I'll sing the fight song right now. And these are real books with complex characters and rollicking plots. We've truly come a long way from "The Hardy Boys," the formulaic drivel that I was reading when I was Potter-age. With 260 million sold, there's no denying the power of the Harry brand. The last one, "Harry Potter and the Order of the Phoenix," was the fastest seller in U.S. publishing history, 5 million copies in the first 24 hours. The new one has gotten the grandest launch ever, 10.8 million up front. All of which is very good reason for the Potter publishing zealots, and thoughtless Potter critics, to just chill out. The critics first: Last time around, we had some fundamentalist Christian ministers throwing Harry Potter book-burnings. They complained that the boy wizard was teaching devil worship to America's innocent young. This was so absurd, it's hardly worth answering now, and thankfully we haven't heard much this time from the holy-roller-hate-Harry crowd. Unfortunately, they've been replaced by the new pope. Even before the new book appears, Pope Benedict XVI has already filed his series review.
The Harry Potter books, he wrote, "distort Christianity in the soul." The comments were made in a 2003 letter to German author (and previous Potter-basher) Gabriele Kuby, back when Benedict was still a cardinal. "It is good that you enlighten people about Harry Potter because these are subtle seductions which act unnoticed and by this deeply distort Christianity in the soul before it can grow properly," he wrote from the Vatican, apparently forgetting that children for centuries have been delighted (not harmed) by fantasy and imagination. But just as this kind of stuff is generating a broad Harry defense, his iron-fisted publishers risk soiling all the good feeling, just as the new book comes out. Scholastic reps have been harrumphing about a handful of Potter leaks. On Monday, a 9-year-old boy bought a copy from an Eckerd's drugstore in Kingston, N.Y. Conscience-stricken, he returned the book after reading just two pages. And two Indianapolis men snagged a copy from a downtown bookstore. "I thought I was seeing things," Tim Meyer told the Indianapolis Star. "I asked the lady, 'Can I buy this now?' and she was like 'Yeah' and just rang it up." Meyer's already on Chapter 18, he said, pronouncing the book a fast read. "What J.K. Rowling is telling you is pretty shocking, considering the last five books." So what can you say? You can say it's even worse in Canada. The Canadian distributor, Raincoast Books, was so upset when a store outside Vancouver mistakenly sold 20 copies, the distributor got a gag order from the British Columbia Supreme Court, forbidding buyers from revealing the plot. No American courts have issued pre-publication prior-restraint orders yet in defense of the big-dollar PR campaign. But as I mentioned, we - and Harry - still have a few hours to go. 
From checker at panix.com Sun Jul 17 00:08:56 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:08:56 -0400 (EDT) Subject: [Paleopsych] NYT: Harry Potter Works His Magic Again in a Far Darker Tale Message-ID: Harry Potter Works His Magic Again in a Far Darker Tale http://www.nytimes.com/2005/07/16/books/16choc.html [Several articles appended, with the pope weighing in in the last one. As with the previous two, we're getting our copy directly from England and not buying the dumbed-down translation into American, where crumpets are called English muffins, not the same thing at all! It took about a week to get the original. Why so long, I don't know. The discount was greater there, so even with postage for air delivery, it was actually cheaper. We got an e-mail at 10:00 am yesterday, presumably GMT, saying the book had been shipped, 14 hours before the official release. I have a picture of the author on the door of my office, on the grounds that she knows more about how to get kids to read than all 4,700 bureaucrats at the U.S. Department of Education. Everyone laughs when I make this claim. So far not a single soul has contested its truth. I suggested that we invite the author to come visit and share her thoughts. Not enough money in the budget, I was told, though it would have cost 0.01% of the annual budget at the most.] HARRY POTTER AND THE HALF-BLOOD PRINCE By J. K. Rowling. Illustrations by Mary GrandPré. 632 pages. Arthur A. Levine/Scholastic. $29.99. By [3]MICHIKO KAKUTANI In an earlier Harry Potter novel, Sibyll Trelawney, divination teacher, looks at Harry and declares that her inner eye sees past his "brave face to the troubled soul within." "I regret to say that your worries are not baseless," she adds. "I see difficult times ahead for you, alas ... most difficult ... I fear the thing you dread will indeed come to pass ... and perhaps sooner than you think."
In "Harry Potter and the Half-Blood Prince," that frightening prophecy does in fact come true - in a thoroughly harrowing denouement that sees the death of yet another important person in Harry's life, and that renders this, the sixth volume of the series, the darkest and most unsettling installment yet. It is a novel that pulls together dozens of plot strands from previous volumes, underscoring how cleverly and carefully J. K. Rowling has assembled this giant jigsaw puzzle of an epic. It is also a novel that depicts Harry Potter, now 16, as more alone than ever - all too well aware of loss and death, and increasingly isolated by his growing reputation as "the Chosen One," picked from among all others to do battle with the Dark Lord, Voldemort. As the novel opens, the wizarding world is at war: Lord Voldemort and his Death Eaters have grown so powerful that their evil deeds have spilled over into the Muggle world of nonmagic folks. The Muggles' prime minister has been alerted by the Ministry of Magic about the rise of Voldemort. And the terrible things that Ms. Rowling describes as being abroad in the green and pleasant land of England read like a grim echo of events in our own post-9/11, post-7/7 world and an uncanny reminder that the Hogwarts Express, which Harry and his friends all take to school, leaves from King's Cross station - the very station where the suspected London bombers gathered minutes before the explosions that rocked the city nine days ago. Harry, who as an infant miraculously survived a Voldemort attack that killed his mother and father, is regarded as "a symbol of hope" by many in the wizarding world, and as he learns more about the Dark Lord's obsession with his family, he realizes that he has a destiny he cannot escape. Like Luke Skywalker, he is eager to play the role of hero. 
But like Spider-Man, he is also aware of the burden that that role imposes: although he has developed romantic yearnings for a certain girl, he is wary of involvement, given his recognition of the dangers he will have to face. "It's been like ... like something out of someone else's life, these last few weeks with you," he tells her. "But I can't ... we can't ... I've got things to do alone now." Indeed, the perilous task Professor Dumbledore sets Harry in this volume will leave him with less and less time for Quidditch and hanging out with his pals Ron and Hermione: he is to help his beloved teacher find four missing Horcruxes - super-secret, magical objects in which Voldemort has secreted parts of his soul as a means of ensuring his immortality. Only when all of these items have been found and destroyed, Harry is told, can the Dark Lord finally be vanquished. There are a host of other unsettling developments in this novel, too: the Dementors, those fearsome creatures in charge of guarding Azkaban Prison, have joined forces with Voldemort; Draco Malfoy, Harry's sneering classmate who boasts of moving on to "bigger and better things," appears to vanish regularly from the school grounds; the sinister Severus Snape has been named the new teacher of defense against the dark arts; two Hogwarts students are nearly killed in mysterious attacks; and Dumbledore suddenly turns up with a badly injured hand, which he declines to explain. One of the few bright spots in Harry's school life appears to be an old textbook annotated by its enigmatic former owner, who goes by the name the Half-Blood Prince - a book that initially supplies Harry with some helpful tips for making potions. The early and middle sections of this novel meld the ordinary and the fantastic in the playful fashion Ms. Rowling has patented in her previous books, capturing adolescent angst about boy-girl and student-teacher relations with perfect pitch. 
Ron and Hermione, as well as Harry, all become involved in romantic flirtations with other students, even as they begin to realize that their O.W.L. (Ordinary Wizarding Level) grades may well determine the course of their post-Hogwarts future. As the story proceeds, however, it grows progressively more somber, eventually becoming positively Miltonian in its darkness. In fact, two of the novel's final scenes - like the violent showdown between Obi-Wan Kenobi and Anakin Skywalker in the last "Star Wars" movie, "Revenge of the Sith" - may well be too alarming for the youngest readers. Harry still has his wry sense of humor and a plucky boyish heart, but as in the last volume ([4]"Harry Potter and the Order of the Phoenix"), he is more Henry V than Prince Hal, more King Arthur than the young Wart. He has emerged, at school and on the Quidditch field, as an unquestioned leader: someone who must learn to make unpopular decisions and control his impetuous temper, someone who must keep certain secrets from his schoolmates and teachers. He has become more aware than ever of what he and Voldemort have in common - from orphaned childhoods to an ability to talk Parseltongue (i.e., snake speech) to the possession of matching wands - and in one chilling scene, he is forced to choose between duty to his mission and his most heartfelt emotions. In discovering the true identity of the Half-Blood Prince, Harry will learn to re-evaluate the value of first impressions and the possibility that his elders' convictions can blind them to parlous truths. And in embracing his own identity, he will discover his place in history. As in earlier volumes, Ms. Rowling moves Harry's story forward by chronicling his adventures at Hogwarts, while simultaneously moving backward in time through the use of flashbacks (via Dumbledore's remarkable Pensieve, a receptacle for people's memories). 
As a result, this is a coming-of-age story that chronicles the hero's evolution not only by showing his maturation through a series of grueling tests, but also by detailing the growing emotional wisdom he gains from understanding more and more about the past. In addition to being a bildungsroman, of course, the Harry Potter books are also detective stories, quest narratives, moral fables, boarding school tales and action-adventure thrill rides, and Ms. Rowling uses her tireless gift for invention to thread these genres together, while at the same time taking myriad references and tropes (borrowed from such disparate sources as Shakespeare, Dickens, fairy tales, Greek myths and more recent works like "Star Wars") and making them her own. Perhaps because of its position as the penultimate installment of a seven-book series, "The Half-Blood Prince" suffers, at moments, from an excess of exposition. Some of Dumbledore's speeches to Harry have a forced, summing-up quality, and the reader can occasionally feel Ms. Rowling methodically setting the stage for developments to come or fleshing out scenarios put in play by earlier volumes (most notably, [5]"Harry Potter and the Chamber of Secrets," with its revelations about the young Voldemort, a k a Tom Riddle). Such passages, however, are easily forgotten, as the plot hurtles along, gaining a terrible momentum in this volume's closing pages. At the same time, the suspense generated by these books does not stem solely from the tension of wondering who will die next or how one or another mystery will be solved. It stems, as well, from Ms. Rowling's dexterity in creating a character-driven tale, a story in which a person's choices determine the map of his or her life - a story that creates a hunger to know more about these people who have become so palpably real. We want to know more about Harry's parents - how they met and married and died - because that may tell us more about Harry's own yearnings and decisions. 
We want to know more about Dumbledore's desire to believe the best of everyone because that may shed light on whom he chooses to trust. We want to know more about the circumstances of Tom Riddle's birth because that may shed light on his decision to reinvent himself as Lord Voldemort. Indeed, the achievement of the Potter books is the same as that of the great classics of children's literature, from the Oz novels to "The Lord of the Rings": the creation of a richly imagined and utterly singular world, as detailed, as improbable and as mortal as our own. -------------------- New 'Harry Potter' Packs a Punch http://www.nytimes.com/aponline/arts/AP-Harry-Potter-Review.html By THE ASSOCIATED PRESS Filed at 12:20 a.m. ET NEW YORK (AP) -- A word of caution to all those hard-core fans about to dive into the latest adventures of Harry Potter: There will be tears. Yours. It's odd to think of the next-to-last Potter book as being a turning point but, in so many ways, that's the truth about ''Harry Potter and the Half-Blood Prince.'' J.K. Rowling's hero is no longer a boy wizard; he's a young man, determined to seek out and face a young man's challenges. Veteran Potter readers shouldn't worry about outgrowing the series, but younger fans may find that it has grown up too much. All ages, however, should be assured: Rowling's latest has lost none of the charm, intelligence and hilarity that have catapulted her series into publishing history. But this book also has a poignancy, complexity and sadness we probably couldn't have imagined when we started reading the first one. There's an emotional punch you won't believe. When Book 5, ''Harry Potter and the Order of the Phoenix,'' ended, we learned what made the evil wizard Lord Voldemort kill Harry's parents and try to kill him as an infant -- a prophecy that said Harry could vanquish him, and that one of them would have to kill the other in order to survive. 
The wizarding world Rowling returns us to in Book 6 is a scarier place, even though only a few weeks have passed. Thanks to Harry and friends, the entire magical community knows that Voldemort is back. And how. He and his Death Eater followers have unleashed so much violence and murder that even the head of the non-magical world has to be told about it. Sooner than in her previous books, the action shifts Harry away from his awful relatives, the Dursleys, and right back to friends Ron and Hermione and all the others at Hogwarts School for Witchcraft and Wizardry. As usual, there are new faces -- the new teacher of Defense against the Dark Arts gives Harry some serious concerns. As sixth-year students, Harry and company have moved on from the magical basics into more complicated studies, preparing for their after-school careers. But it's not just the work that's gotten more complicated, it's everything. Friendships change, love arrives (this, thank goodness, should FINALLY end all those Internet fan site arguments about who is going to hook up with whom) and Harry learns a lot about his enemy -- about his past, and about a potential weakness. We also learn about Harry, about the man he is turning into, his character and his strength of will. But it wouldn't be Hogwarts without strangeness and mystery. Harry has his suspicions about who's trying to do what, and it all erupts in the end. Rowling shows off her mastery, leading us down a path with certain clues and still managing to blindside us about who turns on whom and who doesn't. And, yes, there is another MAJOR death. Seriously major. Break out the tissues. No matter how well you think you know these books, don't assume you really know who anyone is, or what they are and aren't capable of. This is a powerful, unforgettable setup for the finale. 
The hardest thing about ''Half-Blood Prince'' is where it leaves us -- in mourning for who has been lost, anxious to learn how Rowling will wrap up a saga that millions wish would go on and on. On the Net: [3]http://www.scholastic.com [4]http://www.jkrowling.com ------------------------ Harry Potter: A Dissent (2 Letters) http://www.nytimes.com/2005/07/16/opinion/l16potter.html To the Editor: With the latest Harry Potter release ("Harry Potter and the Half-Crazed Summer Camper," Arts pages, July 14), is there room for a dissenting voice? My 10-year-old son announced his intention never to read another Harry Potter book. "Because, Mom, haven't you noticed? It's the same old thing. Harry Potter falls in trouble, Harry Potter learns a spell. It gets so boring." Could I believe my ears? My son, a good reader, at last! And I recalled (silently) a favorite quote from Vladimir Nabokov: "Caress the details," he directed. "Read for the tingle, the shiver up the spine." When my son deposited his hardcover Potter collection in his school's donation box, he assured me: "I don't want to keep these. They're not the kind of books you read twice." Well, I asked, what kind of book do you read again? "One with details," he answered. Sorry, J. K. Rowling... Kate Roth New York, July 14, 2005 To the Editor: The editor who passed on Harry Potter ("The Editor's Tale," by John Kenney, Op-Ed, July 14) missed his true calling in life: he could obviously have a bright future as a country music lyricist. I presume his dog also left him? Bryan F. Erickson Eagan, Minn., July 14, 2005 -------------- New Harry Potter Book Getting Rave Reviews http://www.nytimes.com/aponline/arts/AP-Harry-Potter.html?pagewanted=print By THE ASSOCIATED PRESS Filed at 5:40 p.m. ET NEW YORK (AP) -- After all the hype and midnight madness, ''Harry Potter and the Half-Blood Prince'' is proving as much an event to read as to buy. Critics are calling it the most moving and mature of J.K. Rowling's fantasy series. 
The New York Times compared it favorably to ''The Lord of the Rings,'' and the Los Angeles Times to ''Charlotte's Web.'' The AP's Deepti Hajela called it a ''powerful, unforgettable setup for the finale,'' Book VII, when the great Potter ride is expected to end. ''It'll be very sad when she finishes writing the books,'' said Agnes Jang, 16, a resident of Sydney, Australia, who came out Friday night dressed head-to-toe as a student of the Hogwarts School of Witchcraft and Wizardry. ''But Harry has to move on.'' With a major character dying, tears may well break out around the globe over the next few days, but the age of Potter VI dawned at midnight Saturday with millions of smiles and a bit of a wink from Rowling. In Edinburgh, Scotland, the author emerged from behind a secret panel inside the city's medieval castle, settled into a leather easy chair and read an excerpt from the sixth chapter to a super-select group of 70 children from around the world. ''You get a lot of answers in this book,'' Rowling, a resident of Edinburgh, said as she arrived at the castle before thousands of adoring fans. ''I can't wait for everyone to read it.'' It was party time for Potter lovers. Elisabeth Grant-Gibson, co-owner of ''Windows a Bookshop'' in Monroe, La., said they did more than double a normal good day's business. The first copy went to 10-year-old Chloe Kaczvinsky, whose parents drove 30 miles to attend the store's Harry Potter Pajama Party. Her mother read the first chapter aloud during the ride home, and the second at home. Then her parents went to sleep. ''I asked Mom if I could read the book in bed. I stayed up to 5 and woke up at 8,'' Chloe said. In London, events were muted by the July 7 subway and bus bombings, which killed some 50 people. Book and magazine chain WH Smith scrapped a planned midnight launch at King's Cross Station, from whose fictional Platform 9 3/4 Harry catches the train to Hogwarts at the start of each term. 
The deadliest of the day's four attacks was on a subway near King's Cross. Still, hundreds of thousands of fans turned out to purchase Potter. In Dallas, about 200 of the faithful waited in the dark, mingling in an unlit parking lot, after storms knocked out power at a Barnes & Noble store. White horses posing as unicorns paraded down the main street of Wilmington, Ohio, where Books 'N' More quickly sold hundreds of Potters. Since Rowling first introduced Harry and his fellow students at Hogwarts to the world in 1997, the books have become a global phenomenon, selling 270 million copies in 62 languages and inspiring a series of movies. Rowling is now the richest woman in Britain, with a fortune estimated by Forbes magazine at $1 billion. With only brief interruptions, ''Half-Blood Prince'' has topped the charts of Amazon.com and Barnes & Noble.com since last December, when Rowling announced that she had completed it. Pre-orders worldwide were in the millions and even the audio book has been keeping pace with such blockbusters as Dan Brown's ''The Da Vinci Code'' and David McCullough's ''1776.'' The biggest glitch happened in Canada, where publisher Raincoast sought a court injunction after a Vancouver store accidentally sold 14 copies last week. A judge ordered customers not to discuss the book, copy it, sell it or read it before its release. The biggest gripes came in the U.S., not from critics, but from booksellers and environmentalists. Independent retailers were upset with Scholastic for selling the book on its web site at a 20 percent discount, more than many stores can afford. Environmentalists, meanwhile, were unhappy that Scholastic, unlike Raincoast, doesn't print the books on 100 percent recycled paper. 
''We have some magic up our sleeves too,'' reads a message posted on the Web site of Greenpeace, ''a link to the Canadian publisher of `Harry Potter and the Half Blood Prince,' who can send you a tree-friendly version of this popular book.'' ------ AP reporters Jill Lawless, Cassandra Vinograd and Sarah Blaskovich in London; Catherine McAloon in Edinburgh, Scotland; and Meraiah Foley in Sydney, Australia contributed to this report. On the Net: [3]http://www.bloomsburymagazine.com [4]http://www.scholastic.com [5]http://www.harrypotter.com [6]http://www.jkrowling.com [7]http://www.greenpeace.org
------------------
Growing Up With Harry Potter http://www.nytimes.com/aponline/arts/AP-Growing-up-With-Harry.html By THE ASSOCIATED PRESS Filed at 2:28 a.m. ET RALEIGH, N.C. (AP) -- I was beginning to worry that my 12-year-old daughter might have outgrown Harry Potter -- or at least the excitement of Harry. When ''The Order of the Phoenix'' came out two years ago, Miana insisted we pre-order three months in advance. She begged my wife, Linda, to sew her a black Hogwarts robe, and we spent hours whittling her a wand -- with a whisker from our cat, Bear, as its ''magical core.'' We spent three hours at Borders playing games, doing face-painting and waiting in line to be one of the first with a book. But as Saturday's midnight release of ''The Half-Blood Prince'' approached, Miana, now a rising seventh grader, wasn't even sure she wanted to deal with all that. Two years and 26 days was a long time to be away from Hogwarts School of Witchcraft and Wizardry, so we'd had to find other ways to feed our fantasy. Miana started with Eoin Colfer's ''Artemis Fowl'' series, with its fairies, goblins and pixies. Then we introduced her to J.R.R. Tolkien's ''Lord of the Rings'' trilogy -- an epic tale of good and bad wizards, of dragons and trolls and goblins and elves, and of a resurgent ''dark lord.'' It was the perfect parallel to Potter, and Miana ate it up. For months, she could talk of nothing but enchanted rings, magic swords and elvish runes, and she seemed to have little time for Harry and his friends. Her Hogwarts robe sat balled up in a corner of her closet, wrinkled and forgotten.
Then, about three weeks ago, she asked if I could call Borders and see what they had planned for this year. When I told her the clerk had promised ''crazy, crazy fun,'' she asked if we could sign up. Miana began marking the passage of time in weeks or days ''TH'' -- 'til Harry. On Thursday, Miana called me at work to finalize our plans, which by now included her two best friends, Katie and Amanda. ''I hope I can sleep tonight,'' she said. ''Maybe I should take a Benadryl.'' She did. But, in our defense, she DID have a stuffy nose. Friday morning, I jumped in the car and drove to Borders to be there when they opened at 9 o'clock and get a low number for the line to pick up our book at midnight. When I returned home, Miana and Linda were waiting at the door. ''What number did we get?'' Miana asked. ''They said we'd have to come back later,'' I said. ''But I have a couple of surprises to tide you over.'' I made a big show of pulling out a box of Bertie Bott's Every Flavor beans (with new flavors rotten egg and bacon), a chocolate frog complete with wizard trading card and ... a purple ticket with the number ''0001.'' Miana danced around the kitchen chanting, ''We're No. 1! We're No. 1!'' She and her friends exchanged phone calls throughout the day to confer on wardrobes and hairstyles. Katie, with her beautiful red hair and a black graduation robe from Goodwill, would be Ginny Weasley. Amanda, with a purple robe and her hair dyed raven-black, would be Harry's crush, Cho Chang. Linda had spent the evening before braiding Miana's hair to achieve that Hermione Granger frizziness. ''I'm hyper, I'm hyper,'' Miana said, bouncing up and down. ''Today can't go by fast enough!'' But when we got to the store around 9:30 p.m., the girls were already wondering whether they'd made a mistake dressing up. So few seemed to be in costume this year. 
As the minutes ticked by toward midnight, the three sat in a corner of the bookstore, eating oversized chocolate chip cookies and thumbing through a stack of J-14 and Tiger Beat magazines for the latest gossip on Orlando Bloom and Lindsay Lohan. Their minds seemed to be on anything but a boy with a lightning scar on his forehead. ''Am I having fun yet?'' Miana asked with that look of ennui that only a tween can muster. ''Because if I am, my face hasn't caught up with my brain.'' Maybe we just should've gone to Wal-Mart at midnight and dispensed with all the hoopla. Twelve suddenly seemed much too old for face-painting and hat-making. Then the store manager announced it was time to line up. ''Well,'' Miana said, giving me a shove. ''Get in line, buddy.'' Soon she had grabbed my cell phone and was counting down the minutes. At 11:55, Linda turned to me with a sad expression on her face. ''I'm already dreading the day when we finish the book,'' she said. ''Because then it will be over again.'' ''Are you going to cry?'' Miana asked with a look that said, ''You'd better not.'' When the manager announced at 11:58 that the books were being brought up to the registers, Miana was up on her tiptoes dancing a jig. ''It's 12 o'clock, it's 12 o'clock, it's 12 o'clock,'' she said, handing me back the phone. ''Get your ticket.'' A couple of minutes later, Miana was leaving the store, the new book clutched tightly to her chest. As we drove home under a brilliant yellow half moon, we listened to the book on compact disc. The three girls who just a few hours earlier couldn't have cared less about witches and warlocks sat transfixed in the back seat, not uttering a word. ------ EDITOR'S NOTE: Allen G. Breed is the AP's Southeast regional writer, based in Raleigh. -------------- Rowling Promises Answers in Potter Book http://www.nytimes.com/aponline/arts/AP-Harry-Potter-Castle.html July 15, 2005 By THE ASSOCIATED PRESS Filed at 10:37 p.m. 
ET EDINBURGH, Scotland (AP) -- As midnight drew near, Harry Potter's creator arrived at the glowering medieval castle, prepared to crack open the next adventure the world has been waiting for. J.K. Rowling refused to give away plot details from ''Harry Potter and the Half-Blood Prince'' as she walked the red carpet Friday outside Edinburgh castle, where thousands of fans eagerly waited to watch her read on large video screens. ''You get a lot of answers in this book,'' Rowling said. ''I can't wait for everyone to read it.'' Seventy lucky young fans from around the world were spirited in carriages up cobbled streets into the 11th-century fortress, which was illuminated with neon lights and blazing torches. The carriages were drawn by black and white horses adorned with ostrich plumes, and driven by coachmen wearing capes and black top hats. Inside, lantern-bearing prefects led the 70 to the Queen Anne building, transformed for the evening into the entrance hall of Hogwarts School of Witchcraft and Wizardry. When the clock struck 12, Rowling emerged from behind a secret panel, settled into a leather easy chair and read an excerpt from the sixth chapter to the spellbound group. After the brief reading, the children erupted in screams and applause. The 70 won competitions to report on the book launch for their local newspapers. The fans outside were drawn from schools here in the Scottish capital, where the 39-year-old Rowling lives. Minnie Mass, an 18-year-old from Miami, Florida, found her way to the launch party, along with her best friend, through hard work. They weren't competition winners, but were allowed inside the castle after lining up for several hours and stopping Rowling for a chat as she made her way down the red carpet. ''We are waitresses in Miami Beach. We worked for a year to get over here. We came last night and we are leaving tomorrow,'' said Mass, who is originally from Parana, Brazil.
She said Harry Potter was the first book she read in English after learning the language in the United States: ''I told (Rowling) it was the first book I read in English and she was really touched.'' Chelsea Kennedy, 16, from Toronto, Canada, had only one criticism of the extravaganza. ''I wished it had been longer,'' she said.
---------------------
Potter Fans, it's worth it to be harried. Newsday, July 15, 2005 http://www.newsday.com/news/columnists/ny-nyhen154344021jul15,0,5037324,print.column?coll=ny-news-columnists Hours ahead of schedule, I can now reveal the shocking contents of "Harry Potter and the Half-Blood Prince." Get ready now, you Muggles (that's Potter-talk for people without magical powers). If you aren't prepared to hear the latest adventures of the world's most beloved boy wizard with the lightning-bolt scar - well, you'd probably better stop reading right here. Wait! Only kidding! I don't really have the new Potter book, which is set for formal release right after midnight. I just thought it might be fun to watch those ghoulish enforcers at the Scholastic book division go totally bonkers when they heard that someone had blown their Embargo of Doom. I'll bet those jittery publishing flacks are pressing the panic buttons right now, hyperventilating into their cell phones, interrogating innocent bookstore clerks, sending high-priced legal teams off in search of court injunctions - all to protect their precious 12:01 a.m. release. Ha! I got you this time - your own personal Lord Voldemort, casting evil spells on Harry's publishing-industry control freaks! Now, I agree the Potter books are great. I love how the J.K. Rowling series has created such excitement for reading among the young. Pretty much anything that gets the kids reading, I am fully in favor of. It could be recipes or mattress tags or cereal-box ingredient lists, for all I care. God knows we need something to yank the kiddies away from the video games and the cable TV.
If it's the Hogwarts School of Witchcraft and Wizardry, I'll sing the fight song right now. And these are real books with complex characters and rollicking plots. We've truly come a long way from "The Hardy Boys," the formulaic drivel that I was reading when I was Potter-age. With 260 million sold, there's no denying the power of the Harry brand. The last one, "Harry Potter and the Order of the Phoenix," was the fastest seller in U.S. publishing history, 5 million copies in the first 24 hours. The new one has gotten the grandest launch ever, 10.8 million up front. All of which is very good reason for the Potter publishing zealots, and thoughtless Potter critics, to just chill out. The critics first: Last time around, we had some fundamentalist Christian ministers throwing Harry Potter book-burnings. They complained that the boy wizard was teaching devil worship to America's innocent young. This was so absurd, it's hardly worth answering now, and thankfully we haven't heard much this time from the holy-roller-hate-Harry crowd. Unfortunately, they've been replaced by the new pope. Even before the new book appears, Pope Benedict XVI has already filed his series review. The Harry Potter books, he wrote, "distort Christianity in the soul." The comments were made in a 2003 letter to German author (and previous Potter-basher) Gabriele Kuby, back when Benedict was still a cardinal. "It is good that you enlighten people about Harry Potter because these are subtle seductions which act unnoticed and by this deeply distort Christianity in the soul before it can grow properly," he wrote from the Vatican, apparently forgetting that children for centuries have been delighted (not harmed) by fantasy and imagination. But just as this kind of stuff is generating a broad Harry defense, his iron-fisted publishers risk soiling all the good feeling, just as the new book comes out. Scholastic reps have been harrumphing about a handful of Potter leaks. 
On Monday, a 9-year-old boy bought a copy from an Eckerd's drugstore in Kingston, N.Y. Conscience-stricken, he returned the book after reading just two pages. And two Indianapolis men snagged a copy from a downtown bookstore. "I thought I was seeing things," Tim Meyer told the Indianapolis Star. "I asked the lady, 'Can I buy this now?' and she was like 'Yeah' and just rang it up." Meyer's already on Chapter 18, he said, pronouncing the book a fast read. "What J.K. Rowling is telling you is pretty shocking, considering the last five books." So what can you say? You can say it's even worse in Canada. The Canadian distributor, Raincoast Books, was so upset when a store outside Vancouver mistakenly sold 20 copies, the distributor got a gag order from the British Columbia Supreme Court, forbidding buyers from revealing the plot. No American courts have issued pre-publication prior-restraint orders yet in defense of the big-dollar PR campaign. But as I mentioned, we - and Harry - still have a few hours to go. From checker at panix.com Sun Jul 17 00:09:04 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:09:04 -0400 (EDT) Subject: [Paleopsych] JPSP: Different Emotional Reactions to Different Groups Message-ID: Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice" [INTERPERSONAL RELATIONS AND GROUP PROCESSES] Cottrell, Catherine A.1,2; Neuberg, Steven L.1,3 Journal of Personality and Social Psychology Volume 88(5), May 2005, p. 770-789 [This journal is put out by the American Psychological Association, the same group that publishes _Psychology, Public Policy, and Law_ that featured the Rushton-Jensen article in its June issue. Thanks to Ted for alerting us to this article.
[First, some summaries: Scholars: Prejudice a Complex Mechanism Rooted in the Genes http://theoccidentalquarterly.com/news/printer.php?id=1041 Posted on: 2005-07-05 19:45:18 A recent study in the Journal of Personality and Social Psychology (May 2005) defies longstanding social dogma to suggest that prejudice, or an aversion to members of different groups, is genetically based and arose to enable both group and individual survival. Arizona State University Professor Steven Neuberg and ASU graduate student Catherine Cottrell, in "Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to Prejudice," describe their study of assessments by 235 European-American students at ASU of possible societal threats posed by nine different groups (activist feminists, African-Americans, Asian-Americans, European-Americans, fundamentalist Christians, gay men, Mexican-Americans, Native Americans, and nonfundamentalist Christians) and the emotions registered by the students at perceived threats associated with the different groups. Rather than undifferentiated hostility to the "other," Neuberg and Cottrell found that different types of threat (physical, ideological, or health) evoked different emotions (fear, anger, disgust). Neuberg interprets these nuances as rooted in real threats that led to an evolutionary response: "It was adaptive for our ancestors to be attuned to those outside the group who posed threats such as to physical security, health or economic resources, and to respond to these different kinds of threats in ways tailored to have a good chance of reducing them." Whether Neuberg and Cottrell's findings will help to root out the calcified prejudices of such citadels of professed anti-prejudice as the Anti-Defamation League, which continues to proclaim that "Hate is learned," remains to be seen. References 1. http://www.physorg.com/news4341.html 2. http://www.asu.edu/news/research/prejudicestudy_053105.htm 3. http://content.apa.org/journals/psp/88/5 4.
http://www.adl.org/issue_education/hateprejudice/Prejudice2.asp
------------------
Human prejudice has evolved http://www.physorg.com/news4341.html 5.7.1 Contrary to what most people believe, the tendency to be prejudiced is a form of common sense, hard-wired into the human brain through evolution as an adaptive response to protect our prehistoric ancestors from danger. So suggests a new study published by ASU researchers in the May issue of the Journal of Personality and Social Psychology, which contends that, because human survival was based on group living, "outsiders" were viewed as - and often were - very real threats. "By nature, people are group-living animals - a strategy that enhances individual survival and leads to what we might call a 'tribal psychology'," says Steven Neuberg, ASU professor of social psychology, who wrote the study with doctoral student Catherine Cottrell. "It was adaptive for our ancestors to be attuned to those outside the group who posed threats such as to physical security, health or economic resources, and to respond to these different kinds of threats in ways tailored to have a good chance of reducing them." Unfortunately, says Neuberg, because evolved psychological tendencies are imperfectly attuned to the existence of dangers, people might react negatively to groups and their members even when they pose no realistic threat. Neuberg and Cottrell had 235 European-American students at ASU think about nine different groups: activist feminists, African-Americans, Asian-Americans, European-Americans, fundamentalist Christians, gay men, Mexican-Americans, Native Americans and nonfundamentalist Christians. The researchers then had the participants rate these groups on the threats they pose to American society (e.g., to physical safety, values, health, etc.) and report the emotions they felt toward these groups (e.g., fear, anger, disgust, pity, etc.).
Consistent with the researchers' hypotheses, findings revealed that distinct prejudices exist toward different groups of people. Some groups elicited prejudices characterized largely by fear, others by disgust, others by anger, and so on. Moreover, the different "flavors" of prejudice were associated with different patterns of perceived threat. Follow-up work further shows that these different prejudices motivate inclinations toward different kinds of discrimination, in ways apparently aimed at reducing the perceived threat. "Groups seen as posing threats to physical safety elicit fear and self-protective actions," Cottrell says. "Groups seen as choosing to take more than they give elicit anger and inclinations toward aggression, and groups seen as posing health threats elicit disgust and the desire to avoid close physical contact." "One important practical implication of this research is that we may need to create different interventions to reduce inappropriate prejudices against different groups," Neuberg says. "For example, if one is trying to decrease prejudices among new college students during freshman orientation, different strategies might be used for bringing different groups together." Neuberg and Cottrell are adamant in pointing out that just because prejudices are a fundamental and natural part of what makes us human doesn't mean that learning can't take place and that responses can't be dampened. "People sometimes assume that, because we say prejudice has evolved roots, we are saying that specific prejudices can't be changed. That's simply not the case," Neuberg says. "What we think and feel and how we behave is typically the result of complex interactions between biological tendencies and learning experiences. Evolution may have prepared our minds to be prejudiced, but our environment influences the specific targets of those prejudices."
Source: Arizona State University -------------------- ASU News > Human prejudice has evolved, say ASU researchers http://www.asu.edu/news/research/prejudicestudy_053105.htm Sharon Keeler, sharon.keeler at asu.edu (480) 965-4012 June 1, 2005 Human prejudice has evolved, say ASU researchers Our environment influences the specific targets of those prejudices and how we act on them Could it be that the tendency to be prejudiced evolved as an adaptive response to protect our prehistoric ancestors from danger? So suggest Arizona State University researchers in a new study in the "Journal of Personality and Social Psychology," in which they contend that, because human survival was based on group living, "outsiders" were viewed as - and often were - very real threats. "By nature, people are group-living animals - a strategy that enhances individual survival and leads to what we might call a 'tribal psychology'," says Steven Neuberg, ASU professor of social psychology, who authored the study with doctoral student Catherine Cottrell. "It was adaptive for our ancestors to be attuned to those outside the group who posed threats such as to physical security, health or economic resources, and to respond to these different kinds of threats in ways tailored to have a good chance of reducing them." Unfortunately, says Neuberg, because evolved psychological tendencies are imperfectly attuned to the existence of dangers, people may react negatively to groups and their members even when they actually pose no realistic threat. Neuberg and Cottrell point out that just because prejudices are a fundamental and natural part of what makes us human, that doesn't mean that learning can't take place and that responses can't be dampened. "People sometimes assume that because we say prejudice has evolved roots we are saying that specific prejudices can't be changed. That's simply not the case," Neuberg says.
"What we think and feel and how we behave is typically the result of complex interactions between biological tendencies and learning experiences. Evolution may have prepared our minds to be prejudiced, but our environment influences the specific targets of those prejudices and how we act on them." For their study, Neuberg and Cottrell had 235 European American students at ASU think about nine different groups: activist feminists, African Americans, Asian Americans, European Americans, fundamentalist Christians, gay men, Mexican Americans, Native Americans and nonfundamentalist Christians. The researchers then had the participants rate these groups on the threats they pose to American society (e.g., to physical safety, values, health, etc.) and report the emotions they felt toward these groups (e.g., fear, anger, disgust, pity, etc.). Consistent with the researchers' hypotheses, findings revealed that distinct prejudices exist toward different groups of people. Some groups elicited prejudices characterized largely by fear, others by disgust, others by anger, and so on. Moreover, the different "flavors" of prejudice were associated with different patterns of perceived threat. Follow-up work further shows that these different prejudices motivate inclinations toward different kinds of discrimination, in ways apparently aimed at reducing the perceived threat. "Groups seen as posing threats to physical safety elicit fear and self-protective actions, groups seen as choosing to take more than they give elicit anger and inclinations toward aggression, and groups seen as posing health threats elicit disgust and the desire to avoid close physical contact," says Cottrell. "One important practical implication of this research is that we may need to create different interventions to reduce inappropriate prejudices against different groups," says Neuberg. Keeler, with Marketing & Strategic Communications, can be reached at (480) 965-4012 or (sharon.keeler at asu.edu). 
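The threat-to-emotion links that both write-ups describe (physical-safety threat evokes fear and self-protection, reciprocity threat evokes anger and aggression, health threat evokes disgust and avoidance) can be sketched as a simple lookup. This is a minimal illustration, not the researchers' actual analysis; the mapping follows the quotes above, while the example group ratings are purely hypothetical:

```python
# Sketch of the sociofunctional threat -> emotion -> action links quoted
# above. The three links come from the article; the ratings below are
# hypothetical illustrations, not data from the study.
THREAT_TO_REACTION = {
    "physical_safety": ("fear", "self-protection"),
    "reciprocity":     ("anger", "aggression"),
    "health":          ("disgust", "avoid close physical contact"),
}

def dominant_reaction(threat_ratings):
    """Return the (emotion, action tendency) pair for the highest-rated threat."""
    top_threat = max(threat_ratings, key=threat_ratings.get)
    return THREAT_TO_REACTION[top_threat]

# Hypothetical threat ratings for one perceived group (say, on a 1-7 scale):
ratings = {"physical_safety": 2.1, "reciprocity": 5.6, "health": 1.8}
emotion, action = dominant_reaction(ratings)
print(emotion, action)  # -> anger aggression
```

The point of the structure, per the study, is that the predicted reaction is specific to the threat class rather than a single undifferentiated negativity.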
-------------- PsycARTICLES - Journal of Personality and Social Psychology - Vol 88, Issue 5 http://content.apa.org/journals/psp/88/5 [Other interesting stuff in this issue, so I'll give the summaries. Let me know if you'd like to get a copy of some specific article.] 1. Counterfactual Thinking and the First Instinct Fallacy. By Kruger, Justin; Wirtz, Derrick; Miller, Dale T. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 725-735 Most people believe that they should avoid changing their answer when taking multiple-choice tests. Virtually all research on this topic, however, has suggested that this strategy is ill-founded: Most answer changes are from incorrect to correct, and people who change their answers usually improve their test scores. Why do people believe in this strategy if the data so strongly refute it? The authors argue that the belief is in part a product of counterfactual thinking. Changing an answer when one should have stuck with one's original answer leads to more "if only . . ." self-recriminations than does sticking with one's first instinct when one should have switched. As a consequence, instances of the former are more memorable than instances of the latter. This differential availability provides individuals with compelling (albeit illusory) personal evidence for the wisdom of always following their 1st instinct, with suboptimal test scores the result. 2. Feeling and Believing: The Influence of Emotion on Trust. By Dunn, Jennifer R.; Schweitzer, Maurice E. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 736-748 The authors report results from 5 experiments that describe the influence of emotional states on trust. They found that incidental emotions significantly influence trust in unrelated settings. Happiness and gratitude--emotions with positive valence--increase trust, and anger--an emotion with negative valence--decreases trust. 
Specifically, they found that emotions characterized by other-person control (anger and gratitude) and weak control appraisals (happiness) influence trust significantly more than emotions characterized by personal control (pride and guilt) or situational control (sadness). These findings suggest that emotions are more likely to be misattributed when the appraisals of the emotion are consistent with the judgment task than when the appraisals of the emotion are inconsistent with the judgment task. Emotions do not influence trust when individuals are aware of the source of their emotions or when individuals are very familiar with the trustee. 3. Attitude Importance and the Accumulation of Attitude-Relevant Knowledge in Memory. By Holbrook, Allyson L.; Berent, Matthew K.; Krosnick, Jon A.; Visser, Penny S.; Boninger, David S. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 749-769 People who attach personal importance to an attitude are especially knowledgeable about the attitude object. This article tests an explanation for this relation: that importance causes the accumulation of knowledge by inspiring selective exposure to and selective elaboration of relevant information. Nine studies showed that (a) after watching televised debates between presidential candidates, viewers were better able to remember the statements made on policy issues on which they had more personally important attitudes; (b) importance motivated selective exposure and selective elaboration: Greater personal importance was associated with better memory for relevant information encountered under controlled laboratory conditions, and manipulations eliminating opportunities for selective exposure and selective elaboration eliminated the importance-memory accuracy relation; and (c) people do not use perceptions of their knowledge volume to infer how important an attitude is to them, but importance does cause knowledge accumulation. 4. 
Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice". By Cottrell, Catherine A.; Neuberg, Steven L. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 770-789 The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited. 5. Policewomen Acting in Self-Defense: Can Psychological Disengagement Protect Self-Esteem From the Negative Outcomes of Relative Deprivation? By Tougas, Francine; Rinfret, Natalie; Beaton, Ann M.; de la Sablonnière, Roxane Journal of Personality and Social Psychology. 2005 May Vol 88(5) 790-800 The role of 2 components of psychological disengagement (discounting and devaluing) in the relation between personal relative deprivation and self-esteem was explored in 3 samples of policewomen. Path analyses conducted with the 3 samples revealed that stronger feelings of personal relative deprivation resulted in stronger discounting of work evaluations, which in turn led to devaluing the importance of police work.
A negative relation between discounting and self-esteem was observed in all samples. Other related outcomes of disengagement, professional withdrawal and stress, were also evaluated. 6. Self-Esteem and Favoritism Toward Novel In-Groups: The Self as an Evaluative Base. By Gramzow, Richard H.; Gaertner, Lowell Journal of Personality and Social Psychology. 2005 May Vol 88(5) 801-815 The self-as-evaluative base (SEB) hypothesis proposes that self-evaluation extends automatically via an amotivated consistency process to affect evaluation of novel in-groups. Four minimal group studies support SEB. Personal trait self-esteem (PSE) predicted increased favoritism toward a novel in-group that, objectively, was equivalent to the out-group (Study 1). This association was independent of information-processing effects (Study 1), collective self-esteem, right-wing authoritarianism (RWA), and narcissism (Studies 2 and 3). A self-affirmation manipulation attenuated the association between in-group favoritism and an individual difference associated with motivated social identity concerns (RWA) but did not alter the PSE effect (Study 3). Finally, the association between PSE and in-group favoritism remained positive even when the in-group was objectively less favorable than the out-group (Study 4). 7. Having an Open Mind: The Impact of Openness to Experience on Interracial Attitudes and Impression Formation. By Flynn, Francis J. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 816-826 This article considers how Openness to Experience may mitigate the negative stereotyping of Black people by White perceivers. As expected, White individuals who scored relatively high on Openness to Experience exhibited less prejudice according to self-report measures of explicit racial attitudes. Further, White participants who rated themselves higher on Openness to Experience formed more favorable impressions of a fictitious Black individual. 
Finally, after observing informal interviews of White and Black targets, White participants who were more open formed more positive impressions of Black interviewees, particularly on dimensions that correspond to negative racial stereotypes. The effect of Openness to Experience was relatively stronger for judgments of Black interviewees than for judgments of White interviewees. Taken together, these findings suggest that explicit racial attitudes and impression formation may depend on the individual characteristics of the perceiver, particularly whether she or he is predisposed to consider stereotype-disconfirming information. 8. Resilience to Loss in Bereaved Spouses, Bereaved Parents, and Bereaved Gay Men. By Bonanno, George A.; Moskowitz, Judith Tedlie; Papa, Anthony; Folkman, Susan Journal of Personality and Social Psychology. 2005 May Vol 88(5) 827-843 Recent research has indicated that many people faced with highly aversive events suffer only minor, transient disruptions in functioning and retain a capacity for positive affect and experiences. This article reports 2 studies that replicate and extend these findings among bereaved parents, spouses, and caregivers of a chronically ill life partner using a range of self-report and objective measures of adjustment. Resilience was evidenced in half of each bereaved sample when compared with matched, nonbereaved counterparts and 36% of the caregiver sample in a more conservative, repeated-measures ipsative comparison. Resilient individuals were not distinguished by the quality of their relationship with spouse/partner or caregiver burden but were rated more positively and as better adjusted by close friends. 9. Gender Similarities and Differences in Children's Social Behavior: Finding Personality in Contextualized Patterns of Adaptation. By Zakriski, Audrey L.; Wright, Jack C.; Underwood, Marion K. Journal of Personality and Social Psychology.
2005 May Vol 88(5) 844-855 This research examined how a contextualist approach to personality can reveal social interactional patterns that are obscured by gender comparisons of overall behavior rates. For some behaviors (verbal aggression), girls and boys differed both in their responses to social events and in how often they encountered them, yet they did not differ in overall behavior rates. For other behaviors (prosocial), gender differences in overall rates were observed, yet girls and boys differed more in their social environments than in their responses to events. The results question the assumption that meaningful personality differences must be manifested in overall act trends and illustrate how gender differences in personality can be conceptualized as patterns of social adaptation that are complex and context specific. 10. The Factor Structure of Greek Personality Adjectives. By Saucier, Gerard; Georgiades, Stelios; Tsaousis, Ioannis; Goldberg, Lewis R. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 856-875 Personality descriptors--3,302 adjectives--were extracted from a dictionary of the modern Greek language. Those terms with the highest frequency were administered to large samples in Greece to test the universality of the Big-Five dimensions of personality in comparison to alternative models. One- and 2-factor structures were the most stable across variable selections and subsamples and replicated such structures found in previous studies. Among models with more moderate levels of replication, recently proposed 6- and 7-lexical-factor models were approximately as well replicated as the Big Five. An emic 6-factor structure showed relative stability; these factors were labeled Negative-Valence/Honesty, Agreeableness/Positive Affect, Prowess/Heroism, Introversion/Melancholia, Even Temper, and Conscientiousness. ---------------------------- Journal of Personality and Social Psychology Volume 88(5), May 2005, p. 
770-789 Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice" [INTERPERSONAL RELATIONS AND GROUP PROCESSES] Cottrell, Catherine A. (1,2); Neuberg, Steven L. (1,3) (1) Department of Psychology, Arizona State University. (2) Correspondence concerning this article should be addressed to Catherine A. Cottrell, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104. E-mail: catherine.cottrell at asu.edu (3) Correspondence concerning this article should be addressed to Steven L. Neuberg, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104. E-mail: steven.neuberg at asu.edu

Outline
* Abstract
* A Sociofunctional Approach
* The Goal Relevance of Discrete Emotions
* From Group-Relevant Threats to Discrete Emotions
* Hypotheses
* Other Contemporary Emotion- and Threat-Based Approaches to Prejudice
* Method
* Participants
* Procedure
* Measures
* Affective Reactions
* Threat Perceptions
* Results
* Composite Scores and Difference Scores
* Tests of Hypotheses
* Hypothesis 1: Different Groups Can Evoke Qualitatively Different Profiles of Emotional Reactions
* Hypothesis 2: Measures of Prejudice as Traditionally Conceived Will Often Mask the Variation Across Groups in Evoked Emotion Profiles
* Hypothesis 3: Different Groups Can Evoke Qualitatively Different Profiles of Perceived Threats
* Hypothesis 4: General Measures of Threat Will Often Mask the Variation Across Groups in Evoked Threat Profiles
* Hypothesis 5: Profiles of the Specific Threats Posed by Different Groups Will Reliably and Systematically Predict the Emotion Profiles Evoked by These Groups
* Multiple regression approach
* Cluster analytic approach
* Discussion
* Contributions of the Present Data
* Related Theoretical Perspectives
* Specificity of Emotion, Specificity of Threat
* Alternative Appraisal Theories
* Theoretical Breadth
* Closing Remarks
* References

We offer special thanks to Terrilee Asher and the members of the Friday
afternoon research seminar for their contributions to the early development of these ideas, and to Eliot Smith, Jon Maner, Aaron Taylor, and Amy Cuddy for their helpful suggestions and comments on previous versions of this article. Received Date: January 9, 2004; Revised Date: August 3, 2004; Accepted Date: September 15, 2004 Abstract The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited. Jews are shrewd, religious, and wealthy. African Americans are noisy, athletic, and "have an attitude." Italians are loyal to family, loud, and tradition loving. And the Irish are talkative, happy-go-lucky, and quick tempered. These stereotypes, recently endorsed by American college students (Madon et al., 2001), straightforwardly demonstrate that people hold different beliefs about different groups.
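The abstract's central methodological claim, that a general attitude measure aggregates qualitatively different emotions and so can mask distinct prejudice profiles, can be illustrated with a toy calculation. This is a hedged sketch only: the two groups and all numbers below are hypothetical, not data from the study.

```python
# Toy illustration of how a single aggregate "prejudice" score can mask
# distinct emotion profiles. All groups and ratings are hypothetical.
group_a = {"fear": 5.0, "anger": 1.0, "disgust": 1.5, "pity": 1.5}  # fear-flavored profile
group_b = {"fear": 1.0, "anger": 5.0, "disgust": 1.5, "pity": 1.5}  # anger-flavored profile

def general_prejudice(profile):
    """Aggregate negativity, as a traditional general-attitude measure might compute it."""
    return sum(profile.values()) / len(profile)

def profile_distance(p, q):
    """Total absolute difference between two emotion profiles."""
    return sum(abs(p[k] - q[k]) for k in p)

# The general measure rates the two groups identically...
print(general_prejudice(group_a), general_prejudice(group_b))  # -> 2.25 2.25

# ...but keeping the emotions separate reveals clearly different profiles:
print(profile_distance(group_a, group_b))  # -> 8.0
```

This mirrors the paper's argument for analyzing emotion profiles rather than a single evaluative score: two groups can be "equally disliked" on average while evoking qualitatively different reactions.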
Researchers have long recognized this and have been documenting since the 1930s the diversity of stereotypes used to describe different groups (e.g., Devine & Elliot, 1995; Gilbert, 1951; Karlins, Coffman, & Walters, 1969; Katz & Braly, 1933; Niemann, Jennings, Rozelle, Baxter, & Sullivan, 1994). Researchers have seemingly been less interested, however, in the diversity of people's feelings toward different groups. Although Allport (1954) noted that negative prejudice can include specific "feelings of scorn or dislike, of fear or aversion" (p. 7), his own theorizing focused more on his macroscopic characterization of negative prejudice as an unfavorable feeling toward a group and its members. This latter conceptualization of prejudice, as a general attitude or evaluation, has long dominated the research literature and has been the focus of most theoretical and empirical approaches designed to explicate the origins, operations, and implications of intergroup feelings (for a review, see Brewer & Brown, 1998). As useful as this global view of prejudice has been, we believe there is great value in contemplating seriously Allport's more textured observation--that just as people may hold qualitatively distinct beliefs about different groups, they may feel qualitatively distinct emotions toward different groups. A small set of researchers has begun to explore this possibility (e.g., Brewer & Alexander, 2002; Dijker, 1987; Esses, Haddock, & Zanna, 1993; Fiske, Cuddy, Glick, & Xu, 2002; Mackie, Devos, & Smith, 2000); we review their approaches below. Our own belief in the importance of understanding the textured emotional reactions people have toward members of other groups emerges as an implication of a broader "sociofunctional"
approach we have been developing to better account for a range of intragroup and intergroup phenomena (e.g., Neuberg, Smith, & Asher, 2000).1 To anticipate our argument, we suggest that the specific feelings people have toward members of other groups should depend on the specific tangible threats they see these other groups as posing: From qualitatively different threats should emerge qualitatively different, and functionally relevant, emotions. From this perspective, the concept of prejudice as general attitude is inherently problematic: Because the traditional prejudice construct aggregates across qualitatively different emotional reactions (e.g., anger, fear, disgust, pity, admiration, guilt)--each with its often distinct eliciting conditions, phenomenologies, facial expressions, neurologic structures, physiological patterns, and correlated behavioral propensities--it may obscure the rich texturing of emotional reactions people have toward different groups. Consequently, an exclusive focus on this traditional conceptualization of prejudice is likely to hinder the development of effective theory and practical intervention. A Sociofunctional Approach By their nature, people are group-living animals. According to many anthropologists, environmental challenges present in our evolutionary past propelled ancestral humans toward life in highly interdependent and cooperative groups (e.g., Leakey & Lewin, 1977). This "ultrasociality" (Campbell, 1982), "hypersociality" (Richerson & Boyd, 1995), or "obligatory interdependence" (Brewer, 2001) likely evolved as a means to maximize individual success: An individual was presumably able to gain more essential resources (e.g., food, water, shelter, mates) and achieve more important goals (e.g., child rearing, self-protection) by living and working with other individuals in the context of a group compared with living and working by oneself.
Interdependent group living, then, can be seen as an adaptation--perhaps the most important adaptation (Barchas, 1986; Brewer, 1997; Brewer & Caporael, 1990; Leakey, 1978)--"designed" to protect the human individual from the environment's many dangers while also supporting the effective exploitation of the environment's many opportunities.2 Group life has its costs, however (e.g., R. D. Alexander, 1974; Dunbar, 1988). For instance, group living surrounds one with individuals able to physically harm fellow group members, to spread contagious disease, or to "free ride" on their efforts. A commitment to sociality thus carries a risk: If threats such as these are left unchecked, the costs of sociality will quickly exceed its benefits. Thus, to maximize the returns on group living, individual group members should be attuned to others' features or behaviors that characterize them as potential threats. We note two distinct levels at which group members may threaten each other. The benefits of group living depend not merely on the presence of others but on the effective coordination of these individuals into a well-functioning group. Individual group members should thus be attuned not only to those features and behaviors of others that heuristically characterize them as direct threats to one's personal success but also to those features and behaviors of others that heuristically characterize them as threats to group success, which are our focus here. This latter sensitivity to group-directed threats should be especially acute for those highly invested in, and dependent on, their groups. What events signal to individuals that the functioning of their group may be compromised? Because groups enhance individual success by providing members with valuable resources, members should be attuned to potential threats to group-level resources such as territory, physical security, property, economic standing, and the like.
They should also be attuned to those group structures and processes that support the group's operational integrity--to those structures and processes that encourage effective and efficient group operations. Effective groups tend to possess strong norms of reciprocity, trust among members, systems of effective communication, authority structures for organizing individual effort and distributing group resources, common values, mechanisms for effectively educating and socializing members, members with strong in-group social identities, and the like (e.g., Brown, 1991). Individual group members should thus be especially attuned to potential threats to reciprocity (because others are either unwilling or unable to reciprocate), trust, value systems, socialization processes, authority structures, and so on (Neuberg et al., 2000). Finally, mere attunement to threats cannot be enough: Vigilance must be accompanied by psychological responses that function to minimize--or even eliminate--recognized threats and their detrimental effects. In sum, the sociofunctional approach is based on three simple, but fundamental, propositions: (a) Humans evolved as highly interdependent social beings; (b) effectively functioning groups tend to possess particular social structures and processes; and (c) individuals possess psychological mechanisms "designed" by biological and cultural evolution to take advantage of the opportunities provided by group living and to protect themselves from threats to group living.
Ongoing research has used this approach to successfully predict the traits people most value for members of different social groups and the impressions of themselves they most want to present to others, to generate hypotheses regarding the nature of gossip and other forms of communicated social information, and to motivate explorations of similarities in formal systems of social control across religious and criminal justice systems (e.g., Cottrell & Neuberg, 2004; Cottrell, Neuberg, & Li, 2003; Neuberg & Story, 2003). Here we use the sociofunctional approach, in conjunction with theory and empirical findings on the goal-relevance of discrete emotions, to generate specific predictions about the threat-driven nature of intergroup affect. The Goal Relevance of Discrete Emotions Emotions are critical to the natural goal-seeking process. They signal the presence of circumstances that threaten or profit important goals (e.g., Carver & Scheier, 1990; Ekman & Davidson, 1994; Higgins, 1987; Simon, 1967) and direct and energize behavior toward the remediation of such threats or the exploitation of such benefits (e.g., Cosmides & Tooby, 2000; Ekman, 1999; Nesse, 1990; Plutchik, 1980, 2003; Tooby & Cosmides, 1990). Emotions organize and coordinate ongoing psychological action (e.g., attention, motivation, memory, behavioral inclinations) so that people might respond more effectively to events related to individual survival and success. There is a functional specificity to the emotional system: Different events evoke different emotions. A shadowy figure quickly emerging from a dark alley--a problem related to personal security--elicits fear, whereas the theft of one's car--a problem related to personal resources--elicits anger. Moreover, distinct emotions are affiliated with specific physiological, cognitive, and behavioral tendencies, all of which operate to facilitate resolution of the problem.
For example, the fear felt toward the unfamiliar figure triggers psychological and physical activity aimed at promoting escape from the potentially threatening situation, whereas the anger felt toward the property thief triggers activity aimed at promoting retrieval of the lost goods. Emotions researchers have theorized about the perceived stimulus event classes that elicit qualitatively distinct emotions and action tendencies (e.g., Ekman & Friesen, 1975; Frijda, 1986; Izard, 1991; Lazarus, 1991; Nesse, 1990; Plutchik, 1980) and have arrived at some consensus. Table 1 highlights the links among perceived stimulus event classes, discrete emotions, action tendencies, and resulting functional outcomes for an illustrative set of emotions. For example, perceiving the obstruction of valuable goals or the taking of valuable resources produces anger and a tendency to aggress, perceiving physical or moral contamination produces disgust and a tendency to expel the contaminated object or idea, and perceiving a threat to physical safety produces fear and a tendency to flee. These first three emotions--anger, disgust, and fear--are often considered basic emotions, shaped by natural selection to automatically address recurrent survival-related problems (Ekman, 1999). [Table 1: An Evolutionary Approach to Emotions] Pity, envy, and guilt, on the other hand, involve more complex cognitive appraisals of social situations. These emotional reactions nonetheless progress the individual toward important adaptive outcomes. Pity (as part of the sympathy family of emotions) is hypothesized to be an important emotional response involved in the regulation of the human altruistic system (Trivers, 1971), because it may motivate prosocial behavior toward others who are temporarily disadvantaged for reasons beyond their control, thereby generating gratitude from the recipient and subsequent reciprocity of the assistance back to the helper in the future.
Envy results from feelings of being deprived of valuable resources possessed by another and produces a tendency to obtain the desired objects (Lazarus, 1991; Parrott, 1991), thereby encouraging individuals to pursue limited important resources. Guilt is produced by the belief that one has engaged in a moral transgression that has harmed another (especially a perceived in-group member) and elicits an inclination toward reconciliatory behavior (Lazarus, 1991). Like pity, guilt may also be important to the maintenance of reciprocal relations: Guilt may motivate the wrongdoer to compensate for the harm caused and to follow appropriate rules of reciprocal exchange in the future (Trivers, 1971).

From Group-Relevant Threats to Discrete Emotions

The more basic, "lower brain" emotions did not evolve for the purpose of helping humans manage the threats and opportunities of sociality. Although one must be wary of attributing emotional states to other animals, fear, anger, and disgust, for example, appear to exist in creatures with an evolutionary history much longer than humans' and in species that are barely social (e.g., Izard, 1978; Öhman, 1993; Rozin, Haidt, & McCauley, 1993). Evolution, however, often exploits existing adaptations for other purposes. For example, the infant attachment system may have been co-opted by natural selection to encourage romantic attachment between mates and thus enhance the survival and success of offspring (Shaver, Hazan, & Bradshaw, 1988). Because humans have long been ultrasocial, these valuable emotion-based psychological mechanisms likely became used by natural selection for the additional purpose of helping people protect valuable group resources and maintain the integrity of critical social structures and processes. Just as the theft of an individual's property will evoke anger, so too should the theft of a group's property, particularly among those group members highly invested in and dependent on the group.
Other emotions, in contrast, may have indeed evolved to help social animals manage the complexities of the repeated, relatively stable interdependence that characterizes social life. For instance, unlike fear, anger, and disgust, the emotions of pity, guilt, empathy, embarrassment, and shame are inherently social and have as cognitive antecedents relatively complex appraisals that explicitly involve actual, imagined, or implied others (e.g., Lewis, 1993). Although these emotions likely evolved in the service of managing dyadic social relations, they too may have been easily exploited by natural selection for the additional purpose of managing group and intergroup relations. Because human sociality developed to help individuals gain important tangible resources (e.g., food, shelter, mates), we expect individuals to be most attuned to threats to in-group success when there are tangible outcomes at stake. These emotion-based psychological systems should therefore operate most powerfully within interactions between groups perceived to be mutually interdependent, that is, cooperating or competing to obtain valued tangible outcomes (e.g., as in interactions between White and Black Americans). These threat-emotion systems may operate less prominently within interactions between groups defined primarily by divergent identities alone (e.g., interactions between Honda and Toyota owners).

Integrating, then, the emotions research summarized in Table 1 and our understanding of the fundamental structures and processes underlying effective group operation, we have generated explicit predictions regarding the links between specific threats to the effective functioning of groups (and the more general classes of threat they represent) and the specific emotions they evoke; we present the predictions emerging from this threat-based appraisal framework in Table 2.
Table 2 Hypothesized Theoretical Connections Between Perceived Threats to the In-Group and Elicited Primary and Secondary Emotions

Anger is elicited when people confront obstacles and barriers to their desired outcomes, suggesting that intergroup anger is likely to occur when an out-group is seen to gain in-group economic resources (e.g., jobs), seize or damage in-group physical property (e.g., homes), diminish the freedoms and rights provided to in-group members, choose not to fulfill reciprocal relations with the in-group, interfere with established in-group norms and social coordination, or betray the in-group's trust. As indicated in Table 2, this anger may then spur individuals to engage in functionally appropriate aggressive behaviors aimed at removing the specific perceived obstacle. Moreover, because all intergroup threats, in the most basic sense, obstruct a desired outcome (e.g., physical safety, good health, rewarding reciprocal relations), we hypothesize that anger may be a secondary emotional reaction to an out-group perceived to carry a contagious physical illness, promote values opposing those of the in-group, endanger the in-group's physical safety, neglect a reciprocity-based relationship because of inability, or threaten the in-group's moral standing. Whether immediate or subsequent, then, we suggest that anger will accompany nearly all perceptions of out-group threat (Neuberg & Cottrell, 2002).

Disgust is elicited when people encounter a physical or moral contaminant, suggesting that intergroup disgust is likely to occur when an out-group is thought to carry a contagious and harmful physical illness or when an out-group promotes values and ideals that oppose those of the in-group. This disgust may then motivate qualitatively distinct actions aimed at minimizing the physical or moral contamination.
Because threats to personal freedoms and reciprocity relations (by choice) imply that an out-group may promote values that oppose those of the in-group, we hypothesize that disgust may be a secondary emotional reaction to an out-group seen to intentionally limit the in-group's personal freedoms or violate the rules of reciprocal exchange.

Fear (and its associated tendencies toward self-protective behavior) should predominate when others are perceived to threaten the group's physical safety. We furthermore hypothesize that fear may be a secondary emotional reaction to an out-group perceived to obtain in-group economic resources, seize or damage in-group property, interfere with in-group social coordination, or betray trust relations with the in-group, because each of these obstacle threats signals potential uncertainty for future well-being. Because physical and moral contamination may also heighten insecurity about the future well-being of in-group members (especially susceptible individuals), fear may also be elicited secondarily by perceived threats to group health or group values.

Pity should predominate when others, particularly those potentially existing within an extended in-group, are distressed because they are unable to maintain a reciprocity-based relationship for reasons outside their control (i.e., inability); this may impel prosocial behavior focused on increasing the likelihood that others may be able to meet reciprocity-based obligations in the future. In addition, pity may occur as a secondary emotional reaction to a perceived threat to group health if the diseased others are not held responsible for contracting or passing along their affliction (e.g., Dijker, Kok, & Koomen, 1996; Weiner, Perry, & Magnusson, 1988).

Guilt should predominate when an out-group, suffering because of actions of the perceiver's group, is believed to threaten the moral standing of the perceiver's group.
After committing such image-damaging moral transgressions, individuals may then behave in ways to validate the in-group's position as good and moral (e.g., Branscombe, Doosje, & McGarty, 2002; Lickel, Schmader, & Barquissau, 2004). Finally, envy should occur as a secondary emotional reaction to others who acquire the in-group's economic resources, because these others now possess a desirable object or opportunity that the in-group lacks.

Hypotheses

From the above considerations we have derived five general hypotheses:

Hypothesis 1: Different groups can evoke qualitatively different profiles of emotional reactions. To the extent that different groups can be seen to pose different patterns of threats (see below), they should evoke different profiles of emotional reactions.3

Hypothesis 2: Measures of prejudice as traditionally conceived will often mask the variation across groups in evoked emotion profiles. Because of its conceptualization as a general attitude or evaluation, the traditional measurement of prejudice can obscure the qualitatively distinct emotional responses people have to different groups. This hypothesis will be supported if different groups elicit similar levels of general prejudice but distinct emotion profiles.

Hypothesis 3: Different groups can evoke qualitatively different profiles of perceived threats. Different groups may be perceived to threaten group-level resources and group integrity in different, and multiple, ways: Some may seize our territory and advocate values and principles incompatible with those we cherish; others may carry infectious diseases and fail to contribute their share to the common good. Such groups should elicit distinct threat profiles.

Hypothesis 4: General measures of perceived threat will often mask the variation across groups in evoked threat profiles.
Just as general measures of prejudice may obscure differentiated emotional reactions to groups, general measures of perceived threat may conceal differentiated threats ostensibly posed by different groups. This hypothesis will be supported if different groups elicit similar levels of general threat but distinct threat profiles.

Hypothesis 5: Profiles of the specific threats posed by different groups will reliably and systematically predict the emotion profiles evoked by these groups. If our analysis is correct, profiles of emotional reactions should emerge naturally from profiles of threat perceptions, as articulated in Table 2. This hypothesis will be supported if we can demonstrate a systematic link between the observed threat and emotion profiles.

Other Contemporary Emotion- and Threat-Based Approaches to Prejudice

We are not alone in recognizing the importance of moving beyond the traditional view of prejudice as a general attitude (for a review, see Mackie & Smith, 2002). Moreover, others have explicitly explored the concept of intergroup threat to tangible resources (e.g., LeVine & Campbell, 1972; Sherif, 1966; Stephan & Renfro, 2002). We briefly review these alternative approaches to clarify important points of overlap with our sociofunctional approach as well as to highlight some of the unique contributions made by the current research.

Esses and her colleagues (Esses & Dovidio, 2002; Esses, Haddock, & Zanna, 1993; Haddock, Zanna, & Esses, 1993) have assessed the discrete emotional reactions (e.g., fear, anger, disgust), stereotypes (e.g., friendly, lazy), symbolic beliefs (e.g., "promote religious values," "block family values"), and general attitudes (i.e., prejudice) associated with assorted ethnic and social groups (e.g., French Canadians, Blacks, homosexuals).
To explore the associations among these constructs for each group, these researchers combined the valence and frequency of each reaction to create a single, aggregate indicator for each construct. Although an appropriate strategy given their theoretical interests, such aggregations precluded the possibility of assessing within their samples whether prejudice (as a general attitude) obscured the presence of differing emotion profiles for their different target groups and whether aggregated symbolic beliefs (constituting, perhaps, one form of threat) obscured the presence of differing symbolic threat profiles for their different target groups. Thus, although their data are potentially useful for exploring Hypotheses 1 and 2, in particular, and Hypotheses 3-5 to a substantially lesser extent, their analyses do not provide such tests.

In an examination of prejudice against ethnic out-groups, Dijker and his colleagues (Dijker, 1987; Dijker, Koomen, van den Heuvel, & Frijda, 1996) assessed the emotional reactions native Dutch people experience toward different ethnic minorities (e.g., Surinamese, Turkish, and Moroccan immigrants). They, too, aggregated over discrete emotions to create, on the basis of exploratory factor analyses, four affect categories (i.e., positive mood, anxiety, irritation, concern). Despite this partial aggregation (and the difficulty it causes for rigorously testing Hypothesis 1), their findings nonetheless suggest the importance of considering specific emotions when exploring intergroup affect (e.g., Surinamese, but not Turks or Moroccans, evoked anxiety). Moreover, their data also suggest that certain threats may be more strongly associated with some emotional responses than others (e.g., the perception of danger was associated with anxiety more often than with irritation or worry), a finding consistent with Hypothesis 5.
Thus, although far from a systematic and thorough test of our hypotheses, Dijker and colleagues' findings do lend them some support.

The stereotype content model (Fiske et al., 2002; Fiske, Xu, Cuddy, & Glick, 1999) posits that people experience distinct emotions toward groups perceived to differ on the dimensions of warmth and competence: pity toward high-warmth but low-competence groups, envy toward low-warmth but high-competence groups, admiration toward high-warmth and high-competence groups, and contempt toward low-warmth and low-competence groups. With respect to numerous ethnic, political, religious, and social groups within America, these researchers did indeed observe the predicted differentiated emotional reactions to groups, consistent with Hypothesis 1. We note, however, that (a) their four emotion clusters aggregate across emotions typically believed to be discrete (e.g., anger and disgust are both in the cluster labeled "contempt"), (b) other fundamental emotions (for example, fear) were never analyzed because they failed to fit cleanly into one of these four empirically driven clusters, and (c) the categorical nature of their framework (and accompanying analysis strategy) does not suggest the conceptual possibility that different groups elicit multiple emotions in different configurations (i.e., emotion profiles). As a consequence, the findings from this approach likely underestimate the diversity of emotional reactions people have to different groups; we present evidence suggesting this very point below. Moreover, the aims of these researchers were different from ours, and so we are not able to use their data to test our Hypotheses 2-5.

Intergroup emotions theory (IET; Devos, Silver, Mackie, & Smith, 2002; E. R. Smith, 1993, 1999; Mackie et al., 2000) arises from the melding of social identity and self-categorization theories, on the one hand, with appraisal theories of emotions, on the other.
As with our approach, IET posits that people experience a diversity of discrete intergroup emotions toward different groups. In particular, when social identities are salient, individuals interpret situations in terms of harm or benefit for one's own group and experience specific emotions as suggested by assorted appraisal theories of emotion (they cite Frijda, 1986; Roseman, 1984; Scherer, 1988; C. A. Smith & Ellsworth, 1985). The predictions generated from IET will overlap with the predictions derived from our own framework to the extent that it uses a similar, functionally grounded theory of discrete emotions (which it appears to do) and a similar threat-based appraisal system (which is unclear); indeed, we suspect that the five hypotheses proposed here would be seen by IET proponents as consistent with that approach. Empirically, however, E. R. Smith, Mackie, and their colleagues (Devos, Silver, Mackie, & Smith, 2002; E. R. Smith, 1993, 1999; Mackie et al., 2000) have limited their explorations to the emotions of anger and fear, within the context of having experimental participants imagine interacting with groups designed to differ in the strength of threat they posed to participant in-groups (e.g., individuals valuing social order vs. freedom; fellow students at one's university). To this point, then, the data generated by IET researchers do not test our Hypotheses 1-4 and provide only a partial test of Hypothesis 5.

According to image theory (M. G. Alexander, Brewer, & Herrmann, 1999; Brewer & Alexander, 2002), specific configurations of appraisals on the dimensions of intergroup competition, power, and status give rise to differentiated emotional reactions (e.g., anger, fear, envy), cognitive images (e.g., out-group as enemy, barbarian, or imperialist), and action tendencies (e.g., attack, defend, rebel).
This perspective is compatible with ours in its aim to link specific threats to specific emotions, although image theory focuses more on the sociostructural relations from which different threats and opportunities emerge, whereas we focus more on particular threats and opportunities per se. Recent empirical work examining relations among White and Black American high school students supports the image theory notion that differentiated emotional reactions are indeed associated with different out-group images (Brewer & Alexander, 2002). The findings of these researchers are thus compatible with our Hypotheses 1, 3, and 5, although we note that their categorical scheme, like that of stereotype content theory, does not straightforwardly account for the possibility that different groups elicit multiple emotions in different configurations (i.e., that they may elicit different emotion profiles).

Finally, the revised integrated threat theory (Stephan & Renfro, 2002) emphasizes the importance of threat for understanding prejudice. Revised integrated threat theory posits that four umbrella categories of constructs (realistic threats to the in-group, symbolic threats to the in-group, realistic threats to the individual, and symbolic threats to the individual) cause negative psychological (e.g., prejudice) and behavioral (e.g., aggression) reactions to groups thought to pose such threats. This perspective focuses on a relatively small number of tangible threats, however, and like realistic conflict theories before it (e.g., LeVine & Campbell, 1972; Sherif, 1966) makes no claims as to how different specific threats would elicit distinct, specific emotions. Thus, the data generated by this approach are potentially relevant only to our Hypothesis 3.

Thus, although there exist clear points of convergence between our sociofunctional approach and these other perspectives, the points of divergence are also significant; we further compare the alternative approaches below.
Moreover, note that none of the empirical work emerging from these approaches has explicitly tested Hypotheses 2 and 4 (that general measures of prejudice and threat may actually mask across-group differences in emotion and threat profiles) or has tested Hypotheses 3 and 5 in a comprehensive manner. To test our hypotheses and to provide a uniquely rich data set useful for beginning the process of empirically differentiating among approaches, we presented participants with an assortment of ethnic, religious, and ideological groups within the United States and inquired about (a) the specific emotional reactions they have toward these groups, (b) the general feeling (i.e., prejudice) they have toward these groups, (c) the specific threats they perceive these groups as posing, and (d) the general threat they perceive these groups as posing. We predicted that different groups would elicit different profiles of discrete emotions and threats (Hypotheses 1 and 3); that differentiations among these emotion and threat profiles would often be effectively masked by simple valence-based measures of prejudice and threat (Hypotheses 2 and 4); and that there would be systematic, functional links between specific threats and specific emotions, as articulated in Table 2 (Hypothesis 5).

Two hundred thirty-five European American undergraduate students participated. They were, on average, 20.60 years old (SD = 3.53), predominantly female (63%), and self-identified as mainstream Christian (51%). The majority (64%) were recruited from upper division psychology classes and received extra credit in exchange for their participation. The remainder were recruited from the introductory psychology subject pool and received required course credit in exchange for their participation. Participants from upper division psychology courses completed the questionnaire packets out of the classroom, on their own time.
Questionnaire packets were distributed to the introductory psychology participants in small groups in the laboratory; they completed the items at their own pace. Presentation of the affective response and threat perception items for each group was counterbalanced across all participants. Participants rated a set of nine groups, presented in one of 10 random orders: activist feminists, African Americans, Asian Americans, European Americans, fundamentalist Christians, gay men, Mexican Americans, Native Americans, and nonfundamentalist Christians. Because we expected few threats and little threat-related emotion to be associated with one's own groups, the participants' ethnic in-group (European Americans) and modal religious in-group (nonfundamentalist Christians) were included to serve as baselines for comparison with the other groups.

We selected the additional target groups because (a) our European American participants in the American Southwest likely perceive themselves to be involved with these groups in mutually interdependent relationships involving tangible outcomes, and (b) common stereotypes suggest that these groups might be seen to pose a range of different threats, a requirement if we were to appropriately test our hypotheses. To wit, we suspected that activist feminists, fundamentalist Christians, and gay men would be seen as threatening the values and personal freedoms of our student sample, and in somewhat different ways; that gay men would be seen as posing a threat to health (via a perceived association with HIV/AIDS); that Asian Americans would be seen as posing an economic threat; that African Americans and Mexican Americans would be viewed as posing physical safety, property, and reciprocity (by choice and inability) threats; and that Native Americans would be viewed as posing threats to reciprocity (by inability).
Note that the test of our hypotheses does not depend on whether we are correct in the above presumptions of which groups are associated with particular threats. Indeed, we could be entirely wrong in the threats we expect each group to pose but receive perfect support for our hypotheses, if the emotions elicited by a group are those that map as predicted onto the threats that group is actually perceived by our participants to pose. However, we were confident, on the basis of past research (e.g., Cottrell, Neuberg, & Asher, 2004; Devine & Elliot, 1995; Esses & Dovidio, 2002; Haddock et al., 1993; Haddock & Zanna, 1994; Hurh & Kim, 1989; Yee, 1992), that the collection of groups selected would provide enough variation in perceived threats to enable an adequate test of our hypotheses.

To assess affective responses to the selected groups, participants reported the extent to which they experienced each feeling when thinking about a particular group and its members (1 = Not at all, 9 = Extremely). To assess overall positive evaluation, participants reported the degree to which they liked and felt positive toward each group; to assess overall negative evaluation, participants reported the extent to which they disliked and felt negative toward each group. In addition, we measured 13 emotional reactions with two items each. Some of these emotions were selected because of their straightforward relevance to our theory (see Table 2), namely anger, disgust, fear, pity, envy, and guilt, or because they were longer lasting but less intense instantiations of these (i.e., resentment, anxiety). Others were included merely to provide participants with a broader emotional judgment context (i.e., respect, happiness, hurt, sadness, pride, security, and sympathy). All participants completed these affective response items in the same random order for all groups.
Threat Perceptions

To assess perceived threats associated with the selected groups, participants indicated the extent to which they agreed with statements regarding the general and specific threats that each group poses to American citizens and society (1 = Strongly Disagree, 9 = Strongly Agree). To assess general threat, participants reported the extent to which each group was dangerous and posed a threat to American citizens. To assess specific threats relevant to our sociofunctional approach (see Table 2), participants reported the extent to which they believed the target group threatened jobs and economic opportunities, threatened personal possessions, threatened personal rights and freedoms, violated reciprocity relations by choice, threatened social coordination and functioning, violated trust, threatened physical health, held values inconsistent with those of the in-group, endangered physical safety, and violated reciprocity relations because of a lack of ability.4 Two items were included to measure each of these 10 threats. All participants completed the 2 general threat items followed by the 20 specific threat items in a random arrangement.

Composite Scores and Difference Scores

As described, all participants completed two items designed to assess each emotion and threat construct. These a priori item pairs correlated highly with one another (all rs > .70), and so we averaged them to create composite scores for each general and specific affective response and for each general and specific threat perceived. Although it is not uncommon for researchers to further aggregate such data on the basis of exploratory factor analyses, we have chosen not to do so on technical and theoretical grounds.
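The composite-scoring step just described can be sketched in a few lines. The sketch below uses simulated ratings (the variable names and data are illustrative, not the authors' dataset): an a priori item pair is averaged into a composite only after confirming that the pair coheres, mirroring the reported criterion of all rs > .70.

```python
import numpy as np

# Simulated 1-9 ratings for one construct's two items (e.g., the two
# "anger" items for a single target group); purely illustrative data.
rng = np.random.default_rng(0)
item_a = rng.integers(1, 10, size=235).astype(float)
item_b = np.clip(item_a + rng.normal(0.0, 1.0, size=235), 1.0, 9.0)

# Check that the a priori item pair correlates highly before aggregating.
r = np.corrcoef(item_a, item_b)[0, 1]
assert r > .70, "item pair does not cohere; reconsider aggregation"

# Composite score = mean of the two items, one value per participant.
composite = (item_a + item_b) / 2.0
```

The same averaging would be repeated for each of the 13 emotion constructs and 10 specific threats, for each target group.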
Technically, because exploratory factor analysis is a data-driven approach, it runs the risk of capitalizing on chance characteristics in the data and creating unstable and incoherent factor solutions (Conway & Huffcutt, 2003; Fabrigar, Wegener, MacCallum, & Strahan, 1999). Theoretically, we believe that the individual threat measures, though correlated with one another, assess distinct categories of threat: Stealing a person's car is not the same as making the person ill or assaulting him or her with a weapon. On similar grounds, as many emotions researchers have emphasized, it is necessary to maintain firm empirical distinctions among our measured emotions: Feeling angry is not the same as feeling disgusted or feeling afraid. Indeed, growing evidence demonstrates that unique universal signals, nervous system responses, and antecedent events differentiate the basic emotions (e.g., anger, disgust, fear; Ekman, 1999). This decision to maintain firm distinctions among our threat and emotion constructs is supported by confirmatory factor analyses (CFAs).5 Moreover, if we are incorrect in our belief that these threats and emotions are distinct from one another (if, for example, anger, disgust, and fear functioned identically for our participants), then the predicted textured patterning of perceived threats and emotional patterns would not emerge, and our hypotheses would be disconfirmed.

As noted above, our focus is on the potential patterning of threat-related emotions. These reactions better describe the intergroup interactions of interest, and focusing our report on them greatly streamlines the presentation of a large amount of data.
We thus created emotion composite scores for the emotion constructs most relevant to our theoretical approach: anger/resentment, disgust, fear/anxiety, pity, and envy.6 To create a measure of overall negative prejudice, we subtracted the positive evaluation composite score for each group from the negative evaluation composite score for that group; higher values on this overall prejudice measure indicate more negative prejudice toward the group.

To test Hypotheses 1-4, we used each participant's affect and threat ratings of European Americans as a baseline for comparison against their ratings of the other groups. Thus, we created and analyzed difference scores for each affect and threat by subtracting each participant's affect and threat rating for European Americans from his or her affect and threat rating for each other group. The ratings reported below thus reflect mean difference scores (relative to European Americans) for all participants in our sample. Because all participants were European American, this approach serves to eliminate idiosyncratic differences in participants' tendencies to perceive particular threats and to experience particular emotions and greatly aids with the visual identification and interpretation of affect and threat patterns. Note that our conclusions regarding Hypotheses 1-4 remain unchanged if we instead analyze raw (i.e., nondifference) scores.

Tests of Hypotheses

Hypothesis 1: Different Groups Can Evoke Qualitatively Different Profiles of Emotional Reactions

We conducted a two-way (Target Group × Emotion Experienced) repeated-measures analysis of variance (ANOVA) on the mean difference emotion ratings; a significant Target Group × Emotion Experienced interaction would reveal that the emotion profiles do indeed differ across groups.
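A two-way repeated-measures ANOVA of this form can be sketched with statsmodels' AnovaRM class. The data below are simulated, and the design is reduced to three groups and three emotions for brevity; the group and emotion labels are illustrative stand-ins, not the paper's actual ratings. The Group × Emotion interaction row of the resulting table is the test of whether emotion profiles differ across groups.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format ratings: one row per participant x group x emotion.
rng = np.random.default_rng(1)
rows = [
    {"subject": s, "group": g, "emotion": e, "rating": rng.normal(5.0, 1.0)}
    for s in range(30)
    for g in ["group_a", "group_b", "group_c"]
    for e in ["anger", "fear", "pity"]
]
df = pd.DataFrame(rows)

# Two-way (Target Group x Emotion Experienced) repeated-measures ANOVA;
# the table contains rows for each main effect and the interaction.
result = AnovaRM(df, depvar="rating", subject="subject",
                 within=["group", "emotion"]).fit()
print(result.anova_table)
```

With random data the interaction should generally be nonsignificant; with real ratings, a significant interaction F would indicate group-specific emotion profiles, as reported in the text.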
As predicted, this interaction emerged as highly statistically significant, F(28, 6384) = 31.03, p < .00001, partial η2 = .120; Table 3 presents the means and standard deviations for all emotion ratings for all groups. These data provide substantial support for Hypothesis 1. People may indeed report different patterns of emotional experience toward different groups. For the purpose of more clearly illustrating the diversity of emotional response to groups, we highlight participants' affective reactions to two subsets of groups in Figure 1 (African Americans, Asian Americans, and Native Americans) and Figure 2 (activist feminists, fundamentalist Christians, and gay men).

Figure 1. Participants' mean affective reactions to African Americans, Asian Americans, and Native Americans, relative to affective reactions to European Americans. A repeated-measures analysis of variance on the emotion ratings for these three groups revealed a significant Target Group × Emotion Experienced interaction, F(8, 1824) = 50.63, p < .00001, partial η2 = .182, supporting Hypothesis 1: Participants reported different patterns of emotional reactions to these different groups.

Figure 2. Participants' mean affective reactions to activist feminists, fundamentalist Christians, and gay men, relative to affective reactions to European Americans. A significant Target Group × Emotion Experienced interaction, F(8, 1824) = 15.98, p < .00001, partial η2 = .065, emerged in a repeated-measures analysis of variance on the emotion ratings for these three groups, indicating that participants experienced different patterns of emotional reactions to them.

Table 3 Means and Standard Deviations of Emotional Reactions (Relative to European Americans)

Hypothesis 2: Measures of Prejudice as Traditionally Conceived Will Often Mask the Variation Across Groups in Evoked Emotion Profiles

We have just seen that different groups can evoke different patterns of discrete emotions.
Hypothesis 2 would be supported if groups that elicit distinct emotion profiles nonetheless elicit similar levels of general prejudice. Such a finding would illustrate that prejudice can mask meaningful patterns of underlying emotions. Indeed, as seen in Table 3, many groups that differed from one another in the emotion profiles they evoked also evoked comparable degrees of general prejudice. We illustrate this general pattern with the two subsets of groups presented in Figures 1 and 2.

As presented in Figure 1, African Americans, Asian Americans, and Native Americans differed significantly in the emotion profiles they elicited in our participants. Moreover, they each evoked general negative prejudice: Prejudice difference score for African Americans = 1.14, t(228) = 4.74, p < .001; for Asian Americans, difference = 0.89, t(228) = 4.06, p < .001; and for Native Americans, difference = 0.76, t(228) = 3.32, p < .001. Finally, supporting Hypothesis 2, the prejudice ratings for these three groups did not significantly differ from one another, F(2, 456) = 1.42, p = .24, η2 = .006. Thus, although our participants expressed similar overall negativity toward African Americans, Asian Americans, and Native Americans, they nonetheless reported different discrete emotional reactions toward them. This strongly suggests that measures of general prejudice can indeed mask a rich diversity of discrete emotional reactions.

As presented in Figure 2, activist feminists, fundamentalist Christians, and gay men also differed significantly in the patterns of discrete emotions they elicited in our participants. Moreover, they all elicited substantial amounts of negative prejudice: Prejudice difference scores for feminists = 3.38, t(228) = 11.20, p < .001; for fundamentalist Christians, difference = 3.37, t(228) = 10.51, p < .001; and for gay men, difference = 2.78, t(228) = 8.75, p < .001.
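The one-sample tests reported above compare each group's mean difference score against zero (no difference from the European American baseline). A minimal sketch of that computation, using simulated ratings rather than the actual data and assuming scipy is available:

```python
import numpy as np
from scipy import stats

# Simulated prejudice ratings for one target group and for the
# European American baseline; values here are purely illustrative.
rng = np.random.default_rng(2)
ratings_group = rng.normal(4.5, 2.0, size=229)
ratings_baseline = rng.normal(3.4, 2.0, size=229)

# Difference score: each participant's group rating minus his or her
# baseline rating; a mean reliably above zero indicates general
# negative prejudice toward the group relative to the in-group.
diff_scores = ratings_group - ratings_baseline

t_stat, p_value = stats.ttest_1samp(diff_scores, popmean=0.0)
```

With the paper's sample sizes (df = 228), mean differences near 1 scale point yield the large t values reported in the text.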
Yet here again, the prejudice ratings for these three groups did not differ from one another, F(2, 456) = 1.47, p = .231, η² = .006. This pattern, too, illustrates that measures of overall prejudice can mask a notable diversity of discrete emotional reactions.

Hypothesis 3: Different Groups Can Evoke Qualitatively Different Profiles of Perceived Threats

We performed a two-way (Target Group × Threat Perceived) repeated-measures ANOVA on the mean difference threat ratings; a significant Target Group × Threat Perceived interaction would reveal that different groups can indeed be viewed as posing different profiles of threat. As predicted, this interaction emerged as a significant effect, F(63, 14427) = 46.15, p < .00001, partial η² = .168; Table 4 presents the means and standard deviations for all threat ratings for all groups. These patterns of perceived threats provide substantial support for Hypothesis 3: People may indeed perceive different patterns of specific threats from different groups. For the purpose of more clearly illustrating this effect, we present in Figure 3 the patterns of threats people perceived from activist feminists, African Americans, and fundamentalist Christians.

Figure 3. Participants' mean threat perceptions for activist feminists, African Americans, and fundamentalist Christians, relative to threat perceptions for European Americans. A repeated-measures analysis of variance on the threat ratings for these three groups revealed a significant Target Group × Threat Perceived interaction, F(18, 4122) = 30.05, p < .00001, partial η² = .116, indicating that participants perceived different patterns of threat from these groups and thus illustrating support for Hypothesis 3. Reciprocity (Choice) = nonreciprocity by choice; Reciprocity (Inability) = nonreciprocity by inability.
Table 4. Means and Standard Deviations of Threat Perceptions (Relative to European Americans)

Hypothesis 4: General Measures of Threat Will Often Mask the Variation Across Groups in Evoked Threat Profiles

Our participants often believed that different groups threatened America in different ways. Hypothesis 4 would be supported if groups that evoked distinct threat profiles nonetheless evoked similar levels of general threat. Indeed, as seen in Table 4, many groups that differed from one another in the profiles of specific threats ostensibly posed also evoked similar perceptions of general threat. We illustrate this general pattern with the subset of groups presented in Figure 3. As presented in Figure 3, our participants viewed African Americans, activist feminists, and fundamentalist Christians as posing significantly different profiles of threat. Moreover, these groups are all viewed as generally threatening: the scores all differ from the European American baseline. For the general threat posed by African Americans, difference = 0.87, t(229) = 7.27, p < .001; for activist feminists, difference = 0.76, t(229) = 5.87, p < .001; and for fundamentalist Christians, difference = 0.85, t(229) = 5.85, p < .001. Finally, supporting Hypothesis 4, the general threat ratings for these groups do not differ from one another, F(2, 458) = 0.24, p = .789, η² = .001. Thus, just as a focus on general prejudice can mask an interesting and rich diversity of functionally important emotions evoked by groups, a focus on general threat can mask an interesting and rich diversity of specific threats the groups are seen as posing. We have seen, then, strong support for Hypotheses 1–4. In addition, we note that Cottrell, Neuberg, and Asher (2004) used nearly identical procedures and measures in three additional samples.
These other studies demonstrate patterns of threat perceptions and affective reactions strikingly similar to the ones we reported here and thus strongly corroborate our findings.7

Hypothesis 5: Profiles of the Specific Threats Posed by Different Groups Will Reliably and Systematically Predict the Emotion Profiles Evoked by These Groups

If intergroup emotion indeed represents a functional response to intergroup threat, then we should observe the hypothesized threat-emotion links articulated in Table 2. We explored these hypothesized connections using two essentially independent tests: one based on correlations among the measures, the other based on means of the measures.

Multiple regression approach

To predict each discrete emotion from the 10 specific threats, controlling for the influence of the other threats, we pursued a multiple regression strategy. The intercorrelations among specific threats and between all threats and all emotions were substantial, however, leading to special statistical problems (e.g., multicollinearity, suppression) and rendering findings from these models hard to interpret. We thus used instead the threat classes articulated in Table 2. Specifically, we averaged the 6 threats from the "obstacles" category (i.e., threats to economic resources, property, personal freedoms, reciprocity [by choice], social coordination, and trust) and the 2 threats from the "contamination" category (threats to group health and values). The 2 remaining threats, physical danger and nonreciprocity because of inability, were represented as before.8 We examined threat-emotion relations across target groups. Recall that participants rated all nine groups on threat perceptions and emotional reactions. To avoid complex technical issues related to nonindependence of data, each participant was randomly assigned to provide threat and emotion ratings for only one of the target groups, thereby yielding approximately equal numbers of entries for each group.
This random sample of the complete data set thus contained information on all four threat categories and all five discrete emotions across the nine target groups; this enabled us to perform five regression analyses, each predicting one emotion from the four threat categories, thereby allowing us to assess the independent predictive ability of each threat for each emotion. Because a huge number of different subsamples could be randomly drawn from the complete sample, we conducted these analyses on 50 randomly selected subsamples to reduce the likelihood of drawing conclusions from data patterns idiosyncratic to particular chance samplings. Though not identical, this strategy is somewhat similar to bootstrapping and resampling procedures. In Table 5, we present the mean standardized regression coefficients, averaged across the 50 random subsamples, for each threat category in the regression of each emotion. Note that the general pattern of regression coefficients provides yet another demonstration of the problem associated with conceiving intergroup affect and threat as unidimensional constructs: Different intergroup emotions are predominantly associated with different classes of threat. We turn now to the regression analyses for each emotion.

Table 5. Regressions of Each Emotion on Threat Categories

In line with the hypothesized theoretical connections articulated in Table 2, we expected anger to be independently predicted by obstacle threats; this was clearly the case (average β = .58, p < .001). We also hypothesized that anger might be a secondary emotional reaction to threats to group health and group values; the contamination category did indeed predict anger (average β = .11, p < .001). We also speculated that anger might be secondarily associated with threats to physical safety and reciprocity (because of lack of ability); these speculations were not supported.
Second, we expected that disgust would be independently predicted by contamination; this hypothesis, too, was strongly supported (average β = .35, p < .001). We also thought that two of the obstacle threats in particular (i.e., to personal freedoms and reciprocity relations) might secondarily predict disgust; although obstacle threat as a general class did independently predict disgust (average β = .36, p < .001), the outcomes of our more specific speculations were clearly mixed (see Table 6).

Table 6. Regressions to Test Exploratory, Secondary Predictions

Third, we expected that fear would be independently predicted by physical safety threat. This hypothesis was strongly supported (average β = .37, p < .001), as was our general secondary prediction that obstacle threats might also independently predict fear (average β = .30, p < .001). A perusal of Table 6, however, reveals that the success of our specific secondary predictions regarding specific obstacles was mixed. Fourth, we expected that pity would be independently predicted by the inability to reciprocate, and this was clearly the case (average β = .17, p < .001). Our lone secondary hypothesis, that pity would also be independently associated with the possibility of disease contamination, was supported as well: Contamination in general predicted pity (average β = .20, p < .001); however, this was due to both the disease and the values components of the contamination aggregate (see Table 6). Finally, we expected that envy would be independently predicted by the obstacle of economic threat. Consistent with this, envy was predicted by obstacle threat in the aggregate (average β = .18, p < .001). Moreover, a perusal of Table 6 reveals that this obstacles-envy link was indeed driven largely by economic threat in particular.
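The subsampled regression strategy described above (assign each participant at random to one target group, regress each standardized emotion on the four standardized threat classes, and average the coefficients over 50 such subsamples) can be sketched roughly as follows. All data, dimensions, and effect sizes here are invented for illustration; the first threat class is built to drive the emotion:

```python
import numpy as np

rng = np.random.default_rng(0)

def standardized_betas(X, y):
    """OLS coefficients after z-scoring predictors and outcome."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(yz)), Xz])  # add intercept
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]  # drop the intercept term

# Invented data: each participant rates all 9 groups on 4 threat classes
# and one emotion; the emotion loads on the first threat class only.
n_part, n_groups = 200, 9
threats = rng.normal(size=(n_part, n_groups, 4))
emotion = 0.6 * threats[..., 0] + rng.normal(scale=0.5, size=(n_part, n_groups))

betas = []
for _ in range(50):  # 50 random subsamples
    grp = rng.integers(0, n_groups, size=n_part)  # one group per participant
    rows = np.arange(n_part)
    betas.append(standardized_betas(threats[rows, grp], emotion[rows, grp]))
mean_beta = np.mean(betas, axis=0)  # averaged standardized coefficients
```

With this toy structure, the averaged coefficient for the first threat class comes out large and the others near zero, mirroring the paper's logic of reading the mean β for each threat class per emotion.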
In sum, our primary predictions regarding the functional links between threat classes and their affiliated emotions find strong support in these data: Obstacle threat emerged as an independent predictor of anger, contamination threat emerged as an independent predictor of disgust, physical safety threat emerged as an independent predictor of fear, and reciprocity threat because of inability emerged as an independent predictor of pity. In addition, many of our secondary predictions were borne out as well. Indeed, taking stock of the 20 entries in Table 5, we see that (a) all four of our primary hypotheses (bolded entries) were supported and (b) five of our eight secondary hypotheses (italicized entries) were supported. As further support of our hypotheses, we note that no threat class expected to show a secondary association with an emotion emerged as a better independent predictor than the threat class expected to show a primary association with that emotion. Though our accuracy in predicting null findings (entries in conventional font) may appear less than ideal, these mean regression coefficients are numerically rather small and, in fact, never exceed a coefficient whose significance is expected as the result of a primary or secondary prediction. This overall success rate can be contrasted with the straightforward alternative that there is no specificity of links between threat classes and emotions, operationalized such that no threat classes independently predict specific emotions or that all threat classes independently and equivalently predict all emotions; neither alternative received empirical support from our data. Hypothesis 5 addresses the crux of our theoretical arguments: the notion that specific threats elicit functionally focused emotions. To fully appreciate intergroup emotions, then, the focus should be on specific threat perceptions rather than on the particular group thought to pose a threat.
In this sense, the nine target groups considered in this research are secondary in interest to the threats associated with each group. Threats, as compared with target group, should be better predictors of emotions. This, of course, is an empirical question: How well do target groups per se predict emotional response after controlling for the four threat classes? To better gauge the size of this effect, we dummy coded the target groups and compared the proportions of variation in each emotional reaction explained by four effects: effect of threats, effect of threats controlling for target group, effect of target group, and effect of target group controlling for threats. In Table 7, we present the mean ΔR² values, averaged across the 50 random subsamples for each of these effects in the regression of each emotion. First, we note that the threat classes, as a set, account for a substantial amount of variation in each emotion (especially in the cases of anger, disgust, and fear). Moreover, this effect remains sizable after controlling for the target group being rated. In comparison, target group tends to account for a much smaller, though still significant, portion of variation in emotional response. Crucial to our theoretical arguments, this effect significantly decreases even further after controlling for the threat classes. Threat perceptions therefore appear (at least) to partially mediate the observed group differences in emotional reactions. In theory, we would have hoped for complete mediation.
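The variance-partitioning logic behind this comparison (unique ΔR² of threats controlling for dummy-coded target group, and vice versa) can be illustrated with a toy data set in which group differences in emotion are carried entirely by threat perceptions; all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """R^2 from an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

# Invented mediation structure: group membership shifts threat perceptions,
# and emotion is driven by threats alone (complete mediation by design).
n, n_groups = 450, 9
group = rng.integers(0, n_groups, size=n)
dummies = np.eye(n_groups)[group][:, 1:]  # 8 dummy codes for 9 groups
group_means = rng.normal(size=(n_groups, 4))
threats = group_means[group] + rng.normal(size=(n, 4))
emotion = threats @ np.array([0.5, 0.2, 0.0, 0.0]) + rng.normal(size=n)

r2_threats = r_squared(threats, emotion)
r2_group = r_squared(dummies, emotion)
r2_full = r_squared(np.column_stack([threats, dummies]), emotion)

unique_group = r2_full - r2_threats   # group effect controlling for threats
unique_threats = r2_full - r2_group   # threat effect controlling for group
```

Under complete mediation, the unique group effect shrinks toward zero while the unique threat effect stays large, which is the qualitative pattern the study reports.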
Of course, even if complete mediation by threats exists, it would be difficult to uncover in this investigation because (a) we have not included in our analyses all threat perceptions relevant to emotional reactions (we do not claim to be providing a veritable census of threats); (b) for statistical stability reasons given our sample size, we only estimated main effects of threat perceptions on emotional reactions, thereby not including any of the ways in which the many possible interactions among our 10 threats might account for apparent target group effects; and (c) none of our threat perceptions and emotional reactions were measured perfectly, without error. It is thus the case that the unique effects of target group on emotions, as small as they are, actually overestimate their true sizes. Using the multiple regression approach, then, we see strong support for Hypothesis 5.9

Table 7. Regressions to Compare Effects of Group Type and Threat Perceptions

Cluster analytic approach

Hypothesis 5 posits, generally, that emotion profiles map onto threat profiles. In addition to the multiple regression approach, then, one can alternatively test this hypothesis by assessing the extent to which groups seen to pose similar patterns of threat also evoke similar patterns of emotion. To the extent they do not, support for Hypothesis 5 would be weakened. Cluster analysis (Hair, Anderson, Tatham, & Black, 1992) is a "technique for grouping individuals or objects into clusters so that objects in the same cluster are more like each other than they are like objects in other clusters" (p. 265), and recent uses of this analysis (Fiske et al., 2002; Lickel et al., 2000) have indeed proven valuable in identifying clusters of groups and their common characteristics. As a multivariate technique, cluster analysis is especially well suited for our purposes, because it can calculate similarities and differences among mean profiles of multiple threat or emotion ratings.
One convenient technical implication of this is that cluster analysis is not susceptible to issues of multicollinearity and suppression, both of which complicated our attempt to perform simple multiple regression analyses using the 10 specific threats. A second implication is that it essentially provides an independent test of the hypothesis using the same data set. We began by averaging the participants' ratings of each of the nine groups for each of the 10 specific threats and each of the five discrete emotions. Though we could cluster analyze threat and emotion scores representing differences relative to the threat and emotion scores for European Americans (as we did when testing Hypotheses 1–4, for the reasons discussed above), we chose to average and analyze original threat and emotion ratings for all nine groups, thereby including European Americans as a group in the cluster analyses. We expected that the participants' in-groups (that is, European Americans and nonfundamentalist Christians) might form a single cluster, with threat and emotion profiles differing from those of the other clusters. The use of original threat and emotion ratings for European Americans, as well as the other groups, allows us to explore this idea and to better examine similarities and differences in the profiles. We note that cluster analyses on the difference scores and cluster analyses on the original scores yield identical results (except for the necessary absence of European Americans from the cluster solutions for difference scores). Following the advice and example presented in Hair et al. (1992), we used two types of cluster analysis, each serving a different purpose. Hierarchical cluster analysis is particularly useful to determine the optimal number of clusters present in the data, whereas k-means cluster analysis is particularly useful to determine the arrangement of the nine groups within these clusters.
Hierarchical cluster analysis operates on a similarity matrix containing similarity indices among the objects being clustered (in this case, the nine target groups), using some set of characteristics of each object (the profiles of 10 threats or five emotions). These similarity measures, which involve no decisions about the appropriate number of clusters within the data, offer an additional straightforward test of Hypothesis 5: To the extent there exists a positive correlation between threat similarity measures and emotion similarity measures, then Hypothesis 5 is further supported. Because it is the most commonly used measure of interobject similarity (Hair et al., 1992), we calculated the Euclidean distance between each pair of objects two times, once using the threat profiles and once using the emotion profiles. The correlation coefficient between these threat-based distances and emotion-based distances was .41 (p = .013), indicating that groups that are similar on threat profiles are also similar on emotion profiles, supporting Hypothesis 5. We note above that these similarity measures offer no explicit information about the optimal number of clusters in the data or the arrangement of groups into the clusters. As an extension of the demonstrated relationship between the threat and emotion similarity measures, however, we might reasonably expect the specific arrangement of groups into "threat clusters" to map onto the specific arrangement of groups into "emotion clusters." Because our theoretical framework assigns causal priority to perceived threats, we first performed hierarchical cluster analysis (using Ward's method) on the ratings of the specific threats ostensibly posed by the nine target groups. Although decisions about the best fitting number of clusters are inherently subjective, we adhered to conventional decision rules as outlined in Hair et al. (1992), Blashfield and Aldenderfer (1988), and Everitt and Dunn (2001).
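The profile-similarity test described above can be sketched as follows, assuming invented group-level profiles: pairwise Euclidean distances are computed twice, once over threat profiles and once over emotion profiles, and the two distance vectors are correlated (the study reports r = .41 for the real data; the toy value below will differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_euclidean(profiles):
    """Euclidean distance between every unordered pair of row profiles."""
    n = len(profiles)
    return np.array([np.linalg.norm(profiles[i] - profiles[j])
                     for i in range(n) for j in range(i + 1, n)])

# Invented mean profiles: 9 groups x 10 threats, with 9 x 5 emotion profiles
# generated as a noisy linear function of the threat profiles, so the two
# distance structures should agree.
threat_profiles = rng.normal(size=(9, 10))
mapping = rng.normal(size=(10, 5))
emotion_profiles = threat_profiles @ mapping + rng.normal(scale=0.1, size=(9, 5))

d_threat = pairwise_euclidean(threat_profiles)    # 36 pairwise distances
d_emotion = pairwise_euclidean(emotion_profiles)  # 36 pairwise distances
r = np.corrcoef(d_threat, d_emotion)[0, 1]
```

A positive r over the 36 group pairs is the analogue of the similarity relation the study reports: groups close in threat space are also close in emotion space.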
The agglomeration schedule of a hierarchical cluster analysis specifies the groups that are merged in each stage of the analysis and provides coefficients that indicate the distances between these newly merged groups. Because a large agglomeration coefficient indicates that two relatively different groups have been combined, typical guidelines suggest selecting the number of clusters prior to a large increase in the agglomeration coefficient. Guided by these decision rules, a five-cluster solution offered the best fit to the threat profile data. We next turned to k-means cluster analysis on the threat ratings to determine how the nine target groups fit into the five clusters. Because differences in the randomly chosen initial cluster centers may alter the final cluster solution (Hair et al., 1992), we conducted this analysis multiple times on the same data configured in different arrangements to establish the most stable five-cluster solution, which is presented in the left side of Table 8.

Table 8. Five-Cluster Solutions From Threat and Emotion Cluster Analyses

Moving to emotional responses, k-means cluster analysis was next performed on the ratings of the discrete emotions participants experienced when considering these same nine groups. As noted above, we give causal precedence to perceived threats. We therefore constrained this analysis of the emotion ratings to a five-cluster solution, because this was the most appropriate solution for the threat ratings. This cluster analysis was also performed multiple times to establish the most stable five-cluster solution, which is presented in the right side of Table 8. As Table 8 clearly shows, there is great overlap between the clusters emerging from the analysis of threat perceptions and the clusters emerging from the analysis of emotional experiences: Groups seen as similar in the patterns of threats they pose were also seen as similar in the patterns of emotions they elicited.
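The multiple-restart strategy used to stabilize the k-means solution can be sketched as follows, using a small hand-rolled k-means on invented nine-group threat profiles and keeping the run with the lowest within-cluster sum of squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, n_iter=100):
    """Plain k-means; returns labels and within-cluster sum of squares."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    wss = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return labels, wss

# Invented 9 group-level threat profiles; rerun from many random starts and
# keep the best (lowest within-cluster sum of squares) five-cluster solution,
# mirroring the repeated-analysis strategy described in the text.
profiles = rng.normal(size=(9, 10))
labels, wss = min((kmeans(profiles, k=5) for _ in range(20)),
                  key=lambda result: result[1])
```

Because k-means can converge to different local optima from different random initial centers, retaining the lowest-cost run is a common way to approximate the most stable partition.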
Indeed, the only difference between the two cluster analyses involves the movement of Asian Americans: In the threat analysis, Asian Americans clustered with Native Americans (Cluster 4) because of the perception that these two groups both hold values inconsistent with mainstream American values. In the emotions analysis, Asian Americans clustered with European Americans and nonfundamentalist Christians because of the relatively little threat-related affect elicited by these groups. Aside from this single change, however, the two cluster solutions, derived from analyses of different judgments, are strikingly similar. The probability of observing a perfect replication with all nine groups considered in the calculation, merely by chance, is .00006. We adjusted this value to account for the "defection" by the single group (i.e., Asian Americans); the probability of observing this slightly imperfect match between the two cluster solutions merely by chance remains a very small .0003.10 In sum, groups that clustered together on perceived threat also (nearly perfectly) clustered together on elicited emotions, thereby providing further support for Hypothesis 5.

Discussion

We derived five general hypotheses from our sociofunctional analysis of intragroup and intergroup relations and tested them by examining European American participants' reactions to a variety of ethnic, religious, and ideologically oriented groups encountered frequently within the United States. As predicted, (a) different groups can evoke different profiles of emotions; (b) prejudice, as traditionally measured, can obscure the rich texture of these emotional experiences; (c) different groups are often believed to pose different profiles of threat to one's in-group; and (d) measures of general threat can mask the rich texture of these threat perceptions. We believe these data are the first to provide straightforward empirical support for Hypotheses 2 and 4.
Two sets of analyses also support our fifth hypothesis, that emotional experience arises systematically from threat perception: (a) The perception of particular threats predicted the experience of functionally associated emotions, and (b) groups that elicited similar threat profiles also elicited similar emotion profiles. Although each statistical technique has its own limitations, the cumulative evidence from these analyses offers strong support for Hypothesis 5. Of course, a stronger causal test of Hypothesis 5 is impossible given the correlational nature of our data and participants' preexisting feelings toward and beliefs about the real-world groups we selected. A more rigorous test would require participants to respond to novel or artificial groups about which we could systematically manipulate specific patterns of threats and subsequently measure patterns of emotional response.11

Contributions of the Present Data

Our data illustrate quite clearly that the traditional operationalization of prejudice, as a general attitude, can obscure the richness of emotional experience that groups elicit from others: People do not merely experience evaluative valence when encountering members of groups but instead experience discrete emotions. Moreover, as our threat and emotion profiles make clear, groups cannot be simply characterized as posing one particular threat or as eliciting one particular emotion. Rather, groups are seen to pose multiple threats and to elicit a variety of emotions, often in interesting combinations. In all, then, the negative implications of adhering to the traditional view of prejudice may be substantial. Just as emotion profiles varied across groups, so did threat profiles. The current data thus also suggest a complement to the traditional view of stereotype as trait.
Specifically, the sociofunctional approach presumes that the most important stereotypical knowledge should be knowledge that is relevant to the threats and opportunities the out-group provides for the in-group. Indeed, we suspect that most stereotypical knowledge can usefully be framed in terms of the stable beliefs about the threats and opportunities groups are seen to pose. That is, particular groups are stereotypically characterized as lazy because they are perceived to contribute less than their fair share, as aggressive because they are perceived to threaten physical safety, and so on. More generally, we should note the uniqueness of the data we report here. In terms of affect, we have gathered data about a wide range of emotional reactions people have toward a variety of groups. Although a few others have assessed such a range of emotions, they have mitigated somewhat the value of doing so by aggregating over them (e.g., Brewer & Alexander, 2002; Dijker, 1987; Esses et al., 1993; Fiske et al., 2002). In terms of beliefs, we have begun to document a wide variety of threats that may be stereotypically linked to a variety of groups well known within the United States; to our knowledge, no similar data set exists. Because we have maintained the discrete nature of the assessed emotions and threats, other researchers testing hypotheses of intergroup affect and threat gain access to a useful, rich set of data. Beyond their usefulness for testing our hypotheses, then, these data should also provide researchers with textured descriptive data about how (at least some) people view and feel toward a range of different groups. We acknowledge, of course, that the reactions of our European American university students to specific groups will not correspond perfectly with the reactions of others in different places and at other times.
The threats, and resultant emotional reactions, that members of a particular group associate with another group should emerge from the functional relationship between the two groups as well as associated sociohistorical factors (e.g., Brewer & Alexander, 2002; Fiske et al., 2002). Thus, the current sample should represent well the emotional and stereotypical content held by other samples only to the extent they share similar functional relationships with the groups we have explored here. To the extent, however, that other samples have different functional relationships with these target groups, they should form different threat profiles and experience a different configuration of emotions. For example, because African American and Mexican American respondents differ in the threats they see European Americans posing to their own groups, they should also differ in their emotional reactions to European Americans, and they do (Cottrell & Neuberg, 2003). Note, however, that such variation across perceiver samples in the specific threat perceptions and feelings evoked by target groups does not imply that these samples will exhibit different mappings between specific threats and specific emotions: Regardless of sample, we expect that particular profiles of emotional experience (e.g., those dominated by fear) will emerge systematically from conceptually relevant profiles of threat (i.e., those dominated by perceived threat to physical safety). In a similar vein, individuals who differ from one another in their inclinations toward particular threat appraisals (e.g., individual differences in perceived vulnerability to disease) should differ from one another in the particular intergroup emotions they typically experience (e.g., disgust; see Schaller, Park, & Faulkner, 2003).
A careful look at the emotion profile for Asian Americans reveals a potentially interesting discovery: Our European American participants reported significant general negative prejudice toward Asian Americans but little or no specific threat-related emotion. Across at least four data sets collected by our lab, we have consistently found a similar affect profile for Asian Americans (although slight envy emerges in some samples). This anomaly may be the result of simple and relatively uninteresting causes. In particular, we might surmise that reports of envy toward Asian Americans could appear unjustified or "unsportsmanlike" in a society that so values meritocracy. That is, Americans may tend to view Asian American successes as deserved achievements and thus may be reluctant to admit to or report feeling envious of them. Alternatively, it may be that the high status accorded to Asian Americans is identity threatening, leading individuals to experience specific emotions other than those explored here, such as schadenfreude (pleasure in another's misfortune), an emotion potentially directed toward high-status groups that come upon hard times.12 In all, we are intrigued by the various possibilities and encourage other researchers to explore more deeply specific feelings toward Asian Americans and other higher status groups. Finally, because research findings lend support to the theoretical frameworks that hypothesize their existence, the current data support the usefulness of our broader sociofunctional approach. That these data may also be viewed as consistent with predictions generated from alternative frameworks does not preclude their value for the sociofunctional approach as well; we return to this point below.
Related Theoretical Perspectives

We overviewed in the introduction other research programs and perspectives that take seriously the potential roles that intergroup emotions and tangible intergroup threats play in characterizing prejudice and intergroup relations. Here, within the context of addressing several important theoretical issues, we briefly highlight some similarities and differences among these alternative, and sometimes complementary, approaches.

Specificity of Emotion, Specificity of Threat

Along with others, we propose that the traditional view of prejudice as general attitude is too coarse. As our data indeed demonstrate, prejudices go beyond mere negative feelings toward groups to also reflect patterns of specific emotions (anger, fear, disgust, and the like), patterns that conventional measures of prejudice mask. This recognition is important because, as reviewed above, qualitatively different emotions tend to be associated with qualitatively different actions: People have the urge to aggress against those who anger them, escape those who frighten them, and avoid close contact with those who disgust them. Researchers who ignore the differences in emotion profiles elicited by different groups will thus have great difficulty making fine-grained predictions about intergroup behavior. Of course, if one's aim is only to predict whether a group is likely to be discriminated against, in general, then a general attitude assessment may indeed be sufficient. We suspect, however, that there are important implications, theoretical and practical, of being discriminated against via attack, avoidance, or quarantine, and so we prefer the finer level distinctions. Proponents of alternative models of intergroup affect generally share this view, although there exist some important differences in preferred level of emotion specificity.
For instance, Dijker, Fiske, and their colleagues (Dijker, 1987; Dijker, Koomen, et al., 1996; Fiske et al., 1999, 2002) have used exploratory factor analyses to reduce the number of specific emotions they actually assess to a somewhat smaller set to be analyzed. We think this strategy is less than ideal, for several reasons. First, by their very nature, exploratory analyses force data aggregation in a manner uninformed by insights from the emotions literature, which is increasingly recognizing important distinctions among the different emotions (e.g., Ekman, 1999). Second, such an aggregation strategy increases the likelihood that functionally important emotions may be artificially eliminated from investigation because of idiosyncratic features of the analysis (e.g., the other emotions judged, the criteria chosen to select factor dimensions, the relative reliabilities of the different items). Third, for the same reasons that exploratory factor analyses may lead one to omit theoretically relevant emotions, they may also lead one to overaggregate emotions. Finally, the strategy of data-driven aggregation can lead to groups being characterized as similar when they are not. In the research we report here, we have chosen to maintain the demonstrated distinctions among potential intergroup emotions. Note that if our choice of this finer grain size were a poor one, the hypothesized differences in emotion profiles across groups would not have materialized; if contempt, for example, were the more appropriate level of analysis, then we would have observed no differences in participants' reports of anger and disgust. Participants did indeed make such differentiations, however, lending support to our chosen level of affect specificity. We have taken a similar view when contemplating the appropriate specificity at which to consider intergroup threat. In particular, we rely on a theory-driven analysis in which distinct threats remain empirically distinguished from each other.
Recall that the revised integrated threat theory (Stephan & Renfro, 2002; Stephan & Stephan, 2000) posits that four general categories of constructs (realistic threats to the in-group, symbolic threats to the in-group, realistic threats to the individual, and symbolic threats to the individual) are important in intergroup relations. There is clearly some overlap in our approaches. However, in the absence of finer distinctions among threats, revised integrated threat theory will be unable to account for the observed variation in emotional responses to different groups within each umbrella category.

Alternative Appraisal Theories

The perspectives on intergroup emotions we discuss here share the assumption that different emotions emerge from different appraisals. The approaches differ, however, in their underlying appraisal frameworks. Our sociofunctional perspective proposes that perceptions of specific threats to (and opportunities for) tangible in-group resources and group structures and processes lead to specific intergroup emotions. We articulate our underlying threat-based appraisal theory in detail (see Table 2) and have tested its usefulness via multiple regression and cluster analyses. One implication of this appraisal approach is that it allows for the possibility that groups can be perceived as posing multiple threats to one's own group. This, in turn, suggests the value of examining profiles of perceived threats, a value validated by the findings reported here. In contrast to our threat-based appraisal system, the stereotype content model and image theory look for the sources of emotional response in appraisals of the structural relationships between groups. The stereotype content model (Fiske et al., 1999, 2002) proposes that intergroup emotions result from individuals' assessments of other groups' warmth (warm vs. cold) and competence (competent vs. incompetent), which emerge from perceptions of each group's competition and status, respectively.
These warmth and competence dimensions combine to form a matrix of four possible general views of other groups, and each quadrant engenders a different emotion. An implication of this framework, then, is an exclusive focus on these four emotions (admiration, envy, pity, and contempt). In addition to neglecting the common intergroup emotion of fear and aggregating across anger and disgust, this view does not straightforwardly imply the usefulness of characterizing prejudices in terms of emotional profiles. Along slightly different lines, image theory (M. G. Alexander et al., 1999; Brewer & Alexander, 2002) suggests that emotional responses arise from perceptions of other groups on three dimensions: competition, status, and power. Different configurations of these appraisal dimensions produce different images of the other groups, and each image evokes unique specific emotions. Because groups are presumably represented by only one image, image theory also does not straightforwardly suggest the value of characterizing prejudices in terms of emotion profiles. The comprehensive appraisal framework underlying intergroup emotions theory (IET; e.g., Mackie et al., 2000) has not been explicitly articulated but appears to be based on an integration of existing appraisal theories of emotion (prominently cited are Frijda, 1986; Roseman, 1984; Scherer, 1988; C. A. Smith & Ellsworth, 1985). That IET theorists have tended to focus their empirical work narrowly on individual components of this apparent appraisal framework may explain an empirical difficulty they recently encountered. Specifically, they predicted that in intergroup situations involving potential threats to personal freedoms and beliefs, participants would respond with anger toward the out-group if the in-group was relatively strong and with fear if the in-group was relatively weak; only the predicted anger reaction emerged, however (Mackie et al., 2000).
In a later study, however, in which participants faced a scenario involving physical altercation, the predicted fear response was obtained (Devos et al., 2002). These findings, though not initially predicted by the IET theorists, are consistent with our threat-based appraisal framework, in which threats to physical safety elicit fear and obstructions of important goals elicit anger. Nonetheless, some of the similarities between our two approaches appear striking enough that we have suggested elsewhere that one might profitably view IET and the sociofunctional perspective as complementary, with IET nested within the broader sociofunctional approach (Neuberg & Cottrell, 2002).

Theoretical Breadth

We note one additional difference between the sociofunctional framework and the alternatives we have discussed here. Whereas these others are explicitly about prejudice, intergroup affect, or stereotype content, ours is not. The foundation of the sociofunctional framework is an understanding of the universal nature of intragroup structures and processes, and from this foundation we have derived implications for intergroup affect. However, we have also derived implications for the personal characteristics and traits that people are likely to value (and devalue) in different kinds of groups, for the aspects of self that people are likely to present or manufacture in different social settings, for the kinds of social information that perceivers are especially likely to seek and attune themselves to, for the ways in which legal systems across the globe ought to be similar to (or different from) one another, for commonalities (and differences) in the social teachings of different religions, and so forth. We have begun to accumulate data in several of these domains, and the results are proving to be consistent with the sociofunctional approach (e.g., Cottrell & Neuberg, 2004; Cottrell, Neuberg, & Li, 2003; Neuberg & Story, 2003).
The sociofunctional framework is thus broader in its scope. All else being equal, this lends it some degree of advantage over alternative, but narrower, frameworks.

Closing Remarks

There can be little doubt that the concept of prejudice has been a useful one, and it will remain useful to the extent that one is primarily interested in making general predictions across a broad class of discriminatory behaviors. As with most scientific endeavors, however, the deeper one wants to probe and the more one wants to understand, the more precise and textured one's conceptual and operational tools must become. The data reported here clearly illustrate that the traditional view of prejudice (conceptualized as a general attitude and operationalized via simple evaluation items) is often too gross a tool for understanding the often highly textured nature of intergroup affect. Moreover, we believe the sociofunctional approach is better able to account for these findings than current alternatives, none of which makes the full set of predictions we have tested here. Finally, many of the currently dominant theoretical explorations of prejudice focus on process: on how prejudices are activated, how they influence cognition and action, how individual and group variables influence these processes, and so forth. By focusing on the contents of social and intergroup relations, we believe the sociofunctional approach provides an important complement to these models.

References

Alexander, M. G., Brewer, M. B., & Herrmann, R. K. (1999). Images and affect: A functional analysis of out-group stereotypes. Journal of Personality and Social Psychology, 77, 78–93.
Alexander, R. D. (1974). The evolution of social behavior. Annual Review of Ecology and Systematics, 4, 325–384.
Allport, G. W. (1954). The nature of prejudice. Cambridge, MA: Addison Wesley.
Barchas, P. (1986). A sociophysiological orientation to small groups. In E.
Lawler (Ed.), Advances in group processes (Vol. 3, pp. 209–246). Greenwich, CT: JAI Press.
Blashfield, R. K., & Aldenderfer, M. S. (1988). The methods and problems of cluster analysis. In J. R. Nesselroade & R. B. Cattell (Eds.), Handbook of multivariate experimental psychology (pp. 447–473). New York: Plenum Press.
Branscombe, N. R., Doosje, B., & McGarty, C. (2002). Antecedents and consequences of collective guilt. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 49–66). New York: Psychology Press.
Brewer, M. B. (1997). On the social origins of human nature. In C. McGarty & S. A. Haslam (Eds.), The message of social psychology: Perspectives on mind in society (pp. 54–62). Cambridge, MA: Blackwell.
Brewer, M. B. (2001). Ingroup identification and intergroup conflict: When does ingroup love become outgroup hate? In R. Ashmore, L. Jussim, & D. Wilder (Eds.), Social identity, intergroup conflict, and conflict reduction (pp. 17–41). New York: Oxford University Press.
Brewer, M. B., & Alexander, M. G. (2002). Intergroup emotions and images. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 209–225). New York: Psychology Press.
Brewer, M. B., & Brown, R. (1998). Intergroup relations. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., pp. 554–594). New York: McGraw-Hill.
Brewer, M. B., & Caporael, L. R. (1990). Selfish genes vs. selfish people: Sociobiology as origin myth. Motivation and Emotion, 14, 237–242.
Brown, D. E. (1991). Human universals. New York: McGraw-Hill.
Campbell, D. T. (1982). Legal and primary-group social controls. Journal of Social and Biological Structures, 5, 431–438.
Carver, C. S., & Scheier, M. F. (1990).
Origins and functions of positive and negative affect: A control-process view. Psychological Review, 97, 19–35.
Conway, J. M., & Huffcutt, A. I. (2003). A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods, 6, 147–168.
Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. In M. Lewis & J. M. Haviland-Jones (Eds.), Handbook of emotions (pp. 91–115). New York: Guilford Press.
Cottrell, C. A., & Neuberg, S. L. (2003, February). From patterns of threat to patterns of behavior: Capturing the complexity of intergroup interaction. Paper presented at the annual meeting of the Society for Personality and Social Psychology, Los Angeles.
Cottrell, C. A., & Neuberg, S. L. (2004). How do people present themselves to fellow group members? A sociofunctional analysis of valued and devalued self-presentations. Manuscript in preparation, Arizona State University.
Cottrell, C. A., Neuberg, S. L., & Asher, T. (2004). [Threat perceptions and emotional reactions toward different groups]. Unpublished raw data, Arizona State University.
Cottrell, C. A., Neuberg, S. L., & Li, N. P. (2003, February). What do people want in a group member? A biocultural analysis of valued and devalued characteristics. Poster presented at the annual meeting of the Society for Personality and Social Psychology, Los Angeles.
Devine, P. G., & Elliot, A. J. (1995). Are racial stereotypes really fading? The Princeton trilogy revisited. Personality and Social Psychology Bulletin, 21, 1139–1150.
Devos, T., Silver, L. A., Mackie, D. M., & Smith, E. R. (2002). Experiencing intergroup emotions. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 111–134). New York: Psychology Press.
Dijker, A. J. (1987). Emotional reactions to ethnic minorities. European Journal of Social Psychology, 17, 305–325.
Dijker, A. J., Kok, G., & Koomen, W. (1996). Emotional reactions to people with AIDS. Journal of Applied Social Psychology, 26, 731–748.
Dijker, A. J., Koomen, W., van den Heuvel, H., & Frijda, N. H. (1996). Perceived antecedents of emotional reactions in inter-ethnic relations. British Journal of Social Psychology, 35, 313–329.
Dunbar, R. I. M. (1988). Primate social systems. Ithaca, NY: Cornell University Press.
Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), The handbook of cognition and emotion (pp. 45–60). Sussex, England: Wiley.
Ekman, P., & Davidson, R. J. (1994). The nature of emotion: Fundamental questions. New York: Oxford University Press.
Ekman, P., & Friesen, W. V. (1975). Unmasking the face: A guide to recognizing emotions from facial clues. Englewood Cliffs, NJ: Prentice Hall.
Esses, V. M., & Dovidio, J. F. (2002). The role of emotions in determining willingness to engage in intergroup contact. Personality and Social Psychology Bulletin, 28, 1202–1214.
Esses, V. M., Haddock, G., & Zanna, M. P. (1993). Values, stereotypes, and emotions as determinants of intergroup attitudes. In D. M. Mackie & D. L. Hamilton (Eds.), Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 137–166). San Diego, CA: Academic Press.
Everitt, B. S., & Dunn, G. (2001). Applied multivariate data analysis (2nd ed.). London: Arnold.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.
Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2002).
A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82, 878–902.
Fiske, S. T., Xu, J., Cuddy, A. C., & Glick, P. (1999). (Dis)respecting versus (dis)liking: Status and interdependence predict ambivalent stereotypes of competence and warmth. Journal of Social Issues, 55, 473–491.
Frijda, N. H. (1986). The emotions. Cambridge, England: Cambridge University Press.
Gilbert, G. M. (1951). Stereotype persistence and change among college students. Journal of Abnormal and Social Psychology, 46, 245–254.
Haddock, G., & Zanna, M. P. (1994). Preferring "housewives" to "feminists": Categorization and the favorability of attitudes towards women. Psychology of Women Quarterly, 18, 25–52.
Haddock, G., Zanna, M. P., & Esses, V. M. (1993). Assessing the structure of prejudicial attitudes: The case of attitudes toward homosexuals. Journal of Personality and Social Psychology, 65, 1105–1118.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1992). Multivariate data analysis (3rd ed.). New York: Macmillan.
Higgins, E. T. (1987). Self-discrepancy: A theory relating self and affect. Psychological Review, 94, 319–340.
Hurh, W. M., & Kim, K. C. (1989). The "success" image of Asian Americans: Its validity, and its practical and theoretical implications. Ethnic and Racial Studies, 12, 512–538.
Izard, C. E. (1978). Human emotions. New York: Plenum Press.
Izard, C. E. (1991). The psychology of emotions. New York: Plenum Press.
Karlins, M., Coffman, T. L., & Walters, G. (1969). On the fading of social stereotypes: Studies in three generations of college students. Journal of Personality and Social Psychology, 13, 1–16.
Katz, D., & Braly, K. (1933). Racial stereotypes of one hundred college students. Journal of Abnormal and Social Psychology, 28, 280–290.
Lazarus, R. S. (1991). Emotion and adaptation. New York: Oxford University Press.
Leakey, R. E. (1978). People of the lake: Mankind and its beginnings. New York: Avon.
Leakey, R. E., & Lewin, R. (1977). Origins: What new discoveries reveal about the emergence of our species and its possible future. New York: Dutton.
LeVine, R. A., & Campbell, D. T. (1972). Ethnocentrism: Theories of conflict, ethnic attitudes and group behavior. New York: Wiley.
Lewis, M. (1993). Self-conscious emotions: Embarrassment, pride, shame, and guilt. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 563–573). New York: Guilford Press.
Lickel, B., Hamilton, D. L., Wieczorkowska, G., Lewis, A., Sherman, S. J., & Uhles, A. N. (2000). Varieties of groups and the perception of group entitativity. Journal of Personality and Social Psychology, 78, 223–246.
Lickel, B., Schmader, T., & Barquissau, M. (2004). The evocation of moral emotions in intergroup contexts: The distinction between collective guilt and collective shame. In N. R. Branscombe & B. Doosje (Eds.), Collective guilt: International perspectives (pp. 35–55). New York: Cambridge University Press.
Mackie, D. M., Devos, T., & Smith, E. R. (2000). Intergroup emotions: Explaining offensive action tendencies in an intergroup context. Journal of Personality and Social Psychology, 79, 602–616.
Mackie, D. M., & Smith, E. R. (Eds.). (2002). From prejudice to intergroup emotions: Differentiated reactions to social groups. New York: Psychology Press.
Madon, S., Guyll, M., Aboufadel, K., Montiel, E., Smith, A., Palumbo, P., & Jussim, L. (2001). Ethnic and national stereotypes: The Princeton trilogy revisited and revised.
Personality and Social Psychology Bulletin, 27, 996–1010.
Nesse, R. M. (1990). Evolutionary explanations of emotions. Human Nature, 1, 261–289.
Neuberg, S. L., & Cottrell, C. A. (2002). Intergroup emotions: A sociofunctional approach. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 265–283). New York: Psychology Press.
Neuberg, S. L., Smith, D. M., & Asher, T. (2000). Why people stigmatize: Toward a biocultural framework. In T. F. Heatherton, R. E. Kleck, M. R. Hebl, & J. G. Hull (Eds.), The social psychology of stigma (pp. 31–61). New York: Guilford Press.
Neuberg, S. L., & Story, P. (2003). [Cross-religion similarities in social structures and processes: A test of the sociofunctional approach]. Unpublished raw data, Arizona State University.
Niemann, Y. F., Jennings, L., Rozelle, R. M., Baxter, J. C., & Sullivan, E. (1994). Use of free response and cluster analysis to determine stereotypes of eight groups. Personality and Social Psychology Bulletin, 20, 379–390.
Öhman, A. (1993). Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information processing mechanisms. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 511–536). New York: Guilford Press.
Parrott, W. G. (1991). The emotional experiences of envy and jealousy. In P. Salovey (Ed.), The psychology of jealousy and envy (pp. 3–30). New York: Guilford Press.
Plutchik, R. (1980). Emotion: A psychoevolutionary synthesis. New York: Harper & Row.
Plutchik, R. (2003). Emotions and life: Perspectives from psychology, biology, and evolution. Washington, DC: American Psychological Association.
Richerson, P., & Boyd, R. (1995, January). The evolution of human hypersociality.
Paper presented at the Ringberg Castle Symposium on Ideology, Warfare and Indoctrinability, Ringberg, Germany.
Roseman, I. J. (1984). Cognitive determinants of emotions: A structural theory. In P. Shaver (Ed.), Review of personality and social psychology: Emotions, relationships, and health (pp. 11–36). Beverly Hills, CA: Sage.
Rozin, P., Haidt, J., & McCauley, C. R. (1993). Disgust. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 575–594). New York: Guilford Press.
Schaller, M., & Neuberg, S. L. (2004). The nature in prejudice(s). Manuscript submitted for publication.
Schaller, M., Park, J. H., & Faulkner, J. (2003). Prehistoric dangers and contemporary prejudices. European Review of Social Psychology, 14, 105–137.
Scherer, K. R. (1988). Criteria for emotion-antecedent appraisal: A review. In V. Hamilton, G. H. Bower, & N. H. Frijda (Eds.), Cognitive perspectives on emotion and motivation (pp. 89–126). Norwell, MA: Kluwer Academic.
Shaver, P. R., Hazan, C., & Bradshaw, D. (1988). Love as attachment: The integration of three behavioral systems. In R. J. Sternberg & M. Barnes (Eds.), The anatomy of love (pp. 68–98). New Haven, CT: Yale University Press.
Sherif, M. (1966). In common predicament: Social psychology of intergroup conflict and cooperation. Boston: Houghton Mifflin.
Simon, H. A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29–39.
Smith, C. A., & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48, 813–838.
Smith, E. R. (1993). Social identity and social emotions: Toward new conceptualizations of prejudice. In D. M. Mackie & D. L. Hamilton (Eds.), Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 297–315). San Diego, CA: Academic Press.
Smith, E. R. (1999). Affective and cognitive implications of a group becoming a part of the self: New models of prejudice and of the self-concept. In D. Abrams & M. A. Hogg (Eds.), Social identity and social cognition (pp. 183–196). Malden, MA: Blackwell.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
Stephan, W. G., & Renfro, C. L. (2002). The role of threat in intergroup relations. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 191–207). New York: Psychology Press.
Stephan, W. G., & Stephan, C. W. (2000). An integrated threat theory of prejudice. In S. Oskamp (Ed.), Reducing prejudice and discrimination (pp. 23–45). Mahwah, NJ: Erlbaum.
Tooby, J., & Cosmides, L. (1990). The past explains the present: Emotional adaptations and the structure of ancestral environments. Ethology and Sociobiology, 11, 375–424.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
Weiner, B., Perry, R., & Magnusson, J. (1988). An attributional analysis of reactions to stigmas. Journal of Personality and Social Psychology, 55, 738–748.
Wilson, D. S., & Sober, E. (1994). Reintroducing group selection to the human behavioral sciences. Behavioral and Brain Sciences, 17, 585–654.
Yee, A. H. (1992). Asians as stereotypes and students: Misperceptions that persist. Educational Psychology Review, 4, 95–132.

Footnotes

1. In previous writings and presentations (Cottrell & Neuberg, 2003; Neuberg & Cottrell, 2002), we described this framework as biocultural.
We have changed our labeling of these ideas to sociofunctional to better capture our focus on the functional psychological mechanisms that promote effective and successful social living. Note that this is merely a change in label, not in the content of our approach.

2. We are not suggesting that human sociality emerged because it benefits the survival of the group (i.e., a group selection process; see Sober & Wilson, 1998; Wilson & Sober, 1994), but rather because it benefits the overall fitness of the individual. Moreover, our evolution-based arguments should not be interpreted as deterministic (nor, for that matter, should any evolution-based argument); the social processes we invoke to understand intergroup affect are far from invariable and inevitable. Indeed, these processes, once explicated, lend themselves nicely to effective practical interventions to reduce the maltreatment of groups and people around the globe (for further discussion, see Schaller & Neuberg, 2004). Finally, our belief that an evolution-inspired analysis sheds light on certain unique complexities of intergroup affect in no way implies that the psychological processes and outcomes revealed by our analysis are morally, ethically, or legally justifiable.

3. Groups sometimes provide each other with opportunities as well as threats. However, in light of the great bulk of existing prejudice and intergroup relations research, we focus in this article on patterns of threats and related discrete emotions.

4. In exploratory fashion, we included items designed to assess threats that groups may pose to one's own group's moral standing, in the hope that they would uniquely predict feelings of guilt. Unfortunately, we worded the items poorly, and the composite appears instead to capture a more general sense of threat.
We thus exclude this composite from all analyses to follow but note that including it alters neither our findings nor our conclusions.

5. For each target group, a chi-square difference test revealed that our a priori 10-factor threat model (10 specific threat factors, each represented by an item pair) demonstrated a good fit to the data (as shown by comparative fit index [CFI], root-mean-square error of approximation [RMSEA], and standardized root-mean-square residual [SRMR] values) and fit the data significantly better than a 1-factor threat model (1 general threat factor, represented by all threat items). Similar support was found for our emotion model: Chi-square difference tests revealed that our a priori 5-factor emotion model (anger, disgust, fear, pity, and envy factors, each represented by an item pair) demonstrated a good fit to the data (as shown by CFI, RMSEA, and SRMR values) and fit the data significantly better than a 1-factor emotion model (1 general emotion factor, represented by all emotion items), again for all target groups. Because anger and disgust are sometimes grouped together by exploratory factor analyses (as in research by Fiske et al., 2002), we also compared the 5-factor emotion model with a 4-factor emotion model that combined anger and disgust into 1 factor. For seven of the nine target groups, a chi-square difference test revealed that this 4-factor model fit the data significantly worse than the 5-factor model; for the remaining two target groups, the 4-factor model fit worse than our preferred 5-factor alternative, although not significantly so. In all, the CFAs strongly validate our theory-based decisions to use measures of relatively discrete threats and emotions.
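The model comparisons in Footnote 5 rest on the standard chi-square difference test for nested CFA models: the difference in chi-square values between the restricted and the full model is itself chi-square distributed, with degrees of freedom equal to the difference in model df. A minimal sketch of that computation (the fit statistics below are hypothetical placeholders, not values from the article; the tail-probability helper is exact only for even df differences, via the Poisson identity for the chi-square distribution):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) for a chi-square variable with EVEN df,
    using the exact identity P(chi2 with 2k df > x) = P(Poisson(x/2) <= k-1)."""
    assert df % 2 == 0 and df > 0
    k = df // 2
    lam = x / 2.0
    term = math.exp(-lam)  # Poisson probability at 0
    total = term
    for i in range(1, k):
        term *= lam / i    # recurrence: P(i) = P(i-1) * lam / i
        total += term
    return total

def chi_square_difference_test(chi2_restricted, df_restricted, chi2_full, df_full):
    """Compare two nested CFA models. The more restricted model (e.g., a
    1-factor model) has fewer free parameters and hence MORE df; a significant
    difference means the restriction significantly worsens fit."""
    d_chi2 = chi2_restricted - chi2_full
    d_df = df_restricted - df_full
    return d_chi2, d_df, chi2_sf_even_df(d_chi2, d_df)

# Hypothetical fit statistics, for illustration only (not the authors' values):
d_chi2, d_df, p = chi_square_difference_test(412.3, 190, 250.8, 146)
# d_chi2 is about 161.5 on d_df = 44; p falls far below .05, so the
# 1-factor restriction would be rejected in favor of the multi-factor model.
```

Note that this difference test only adjudicates between the nested models; whether either model fits adequately in absolute terms would still be judged against the CFI, RMSEA, and SRMR values the authors mention.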
6. Because of our unsuccessful attempt to generate a valid measure of morality threat (see Footnote 4), we were unable to conduct our focal threat-emotion analysis for this threat's associated emotion (i.e., a test of the proposed link between threat to in-group morality and guilt). As a result of this failure to fully test our hypotheses related to guilt, we chose to discard guilt from further analyses. Note that including the discarded items in analyses does not alter any of our conclusions. These complete data are available from the authors by request.

7. Some of those findings were reported in preliminary form (Neuberg & Cottrell, 2002). The full data sets from these additional samples are available from the authors on request.

8. CFAs also offer some empirical support for this decision to arrange the 10 specific threats into four threat classes. We tested a higher order threat model with the second-order obstacles factor (on which six first-order threat factors load), the second-order contamination factor (on which two first-order threat factors load), the first-order physical safety threat factor, and the first-order nonreciprocity (by inability) threat factor. For each target group, this model demonstrated a marginally adequate fit to the data (as shown by CFI, RMSEA, and SRMR values). Although this four-factor threat model may be less than ideal for capturing relationships among the threats, our current purposes rest with explaining threat-emotion links. As such, we have chosen to use this threat representation, in which specific threats believed to elicit the same emotion are clustered together into threat classes. Note that the less than ideal status of this measurement model can only work against our hypotheses relating obstacle threats to anger and contamination threats to disgust.

9. We note two additional pieces of corroborative evidence for Hypothesis 5.
First, we tested the hypothesized threat-emotion links using group-level multiple regression analyses with the threat and emotion ratings for each target group averaged across all participants; these analyses are limited by the small sample size (nine target groups), which leaves them drastically underpowered. We also tested the hypothesized threat-emotion links using multilevel models that clustered the target group ratings by participant; these analyses provide an appropriate statistical means of accounting for the nonindependence of target group ratings. In all, both the group-level regression analyses and the multilevel models revealed patterns of specific threat-emotion links similar to those obtained from the individual-level regression analyses on the random samples.

10. Because we assign causal priority to perceived threat, we wanted to calculate the probability of perfectly replicating, in the emotion cluster analyses, the five-cluster solution based on threat ratings (as shown in the left side of Table 8). After determining the probability of replicating each individual cluster, we calculated the product of these individual probabilities to obtain the probability of a perfect match between the five-cluster threat solution and an emotion cluster solution. We calculated this probability to be .00006. Because Asian Americans were the only group to "move" clusters from our threat cluster solution to our emotion cluster solution, we recalculated this probability without Asian Americans. For this probability of perfect replication with only the eight remaining groups, we obtained a value of .0003. Information on these calculations is available from the authors.

11. We are currently collecting such experimental data.

12. We thank Naomi Ellemers for suggesting this interesting possibility.
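The calculation described in Footnote 10 is simply a product of independent per-cluster match probabilities. A minimal sketch (the five per-cluster values below are hypothetical placeholders chosen only so that their product lands on the reported .00006; the authors do not report the individual probabilities):

```python
import math

# Hypothetical per-cluster chance-match probabilities (placeholders, not the
# authors' values; only the final product, .00006, is reported in the text):
p_cluster = [0.2, 0.15, 0.1, 0.25, 0.08]

# Probability that an independent (emotion-based) clustering reproduces all
# five threat-based clusters by chance: the product of the per-cluster values.
p_perfect_match = math.prod(p_cluster)
```

Dropping the one "moving" group simply removes its factor from the product, which is why the eight-group probability (.0003) is larger than the nine-group probability (.00006).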
From checker at panix.com Sun Jul 17 00:09:30 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:09:30 -0400 (EDT) Subject: [Paleopsych] Routledge: Roger Crisp: Moral Particularism Message-ID: Roger Crisp: Moral Particularism The Routledge Encyclopedia of Philosophy Moral particularism is a broad set of views which play down the role of general moral principles in moral philosophy and practice. Particularists stress the role of examples in moral education and of moral sensitivity or judgment in moral decision-making, as well as criticizing moral theories which advocate or rest upon general principles. It has not yet been demonstrated that particularism constitutes an importantly controversial position in moral philosophy. Moral particularism is the view that general moral principles play less of a role in moral thought than has often been claimed. In its most extreme form, particularism states that there are no genuine moral principles, and that therefore moral agents who attempt to guide their action by reference to moral principles, and philosophers who attempt to construct moral theories based on principles, are seriously mistaken (see Moral realism §5; Situation ethics). Many particularists are influenced by the view of Wittgenstein that one acquires a concept not by being taught some universal rule for its application, but through introduction into a human practice and a way of seeing things (see Wittgensteinian ethics §3). The particularist view of moral education will stress the importance of examples and actual experience of individual moral cases rather than the learning of universal moral rules under which particular cases can be subsumed (see Examples in ethics §2; Moral education §3). A link is often drawn between moral particularism and so-called 'antitheory' in ethics.
Antitheorists suggest that ethical theorists are in error in postulating principles according to which actions are right to the extent that they are in accord with these principles. Utilitarianism, for example, claims that acts are right to the extent that they maximize utility (see Utilitarianism). A further link is often alleged between particularism, anti-theory and virtue ethics (see Virtue ethics). But this link may rest on a confusion between particularism about moral theory and particularism about moral agency. Virtue ethics has its own principle: right actions are those that the virtuous person would do. The virtuous person will indeed not in practice proceed by attempting to apply this principle directly. Here, however, virtue ethics and utilitarianism are in agreement, since most utilitarians have claimed that moral agents should not attempt to apply utilitarianism in practice. Moral particularism can also emerge out of the theory of reasons for action. On the view of Dancy (1993), reasons are not universalizable across cases, so that what counts as a reason in one case need not be assumed to function as a reason in the same way in other cases (see Logic of ethical discourse §7). My enjoying giving you a present counts in favour of my action; but my enjoying torturing you counts against. Against this, it can be suggested that reasons are universalizable at a higher level (innocent pleasure, perhaps, always counts as a reason), and that to deny this is to embrace a form of irrationalism. According to less extreme particularists, principles can play some role in theory and in practice (see Casuistry). On one view, they serve as useful generalizations, but there is always a need for judgment in particular cases (see Moral judgment §2; Theory and practice §2). Aristotle is best seen as such a particularist.
Once again, it is not clear that, for example, utilitarians or Kantians would wish to deny such a role for judgment (see Kantian ethics; Universalism in ethics §3). See also: Aesthetics and ethics References and further reading Aristotle (c. mid 4th century BC) Nicomachean Ethics, trans. with notes by T. Irwin, Indianapolis, IN: Hackett Publishing Company, 1985, book 6. (Contains an account of moral judgment or 'practical wisdom'.) Dancy, J. (1993) Moral Reasons, Oxford: Blackwell. (Central outline and defence of modern particularism. Difficult.) McDowell, J. (1979) 'Virtue and reason', Monist 62: 331-50. (Defence of a form of particularist virtue ethics involving both Aristotle and Wittgenstein. Difficult.) Ross, W.D. (1930) The Right and the Good, Oxford: Clarendon Press, chaps 1-2. (Has been influential on the development of modern particularism.) From checker at panix.com Sun Jul 17 00:09:36 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:09:36 -0400 (EDT) Subject: [Paleopsych] Routledge: David B. Wong: Moral Relativism Message-ID: David B. Wong: Moral Relativism The Routledge Encyclopedia of Philosophy Often the subject of heated debate, moral relativism is a cluster of doctrines concerning diversity of moral judgment across time, societies and individuals. Descriptive relativism is the doctrine that extensive diversity exists and that it concerns values and principles central to moralities. Meta-ethical relativism is the doctrine that there is no single true or most justified morality. Normative relativism is the doctrine that it is morally wrong to pass judgment on or to interfere with the moral practices of others who have adopted moralities different from one's own. Much debate about relativism revolves around the questions of whether descriptive relativism accurately portrays moral diversity and whether actual diversity supports meta-ethical and normative relativism.
Some critics also fear that relativism can slide into nihilism. 1. Descriptive relativism 2. Meta-ethical relativism 3. Normative relativism 4. Relativism and moral confidence 1. Descriptive relativism From the beginnings of the Western tradition philosophers have debated the nature and implications of moral diversity. Differences in customs and values the Greeks encountered through trade, travel and war motivated the argument attributed to the sophist Protagoras in Plato's Theaetetus: that human custom determines what is fine and ugly, just and unjust (see Protagoras). Anthropologists in the twentieth century, such as Ruth Benedict (1934), have emphasized the fundamental differences between the moralities of small-scale traditional societies and the modern West. For example, many traditional societies are focused on community-centred values that require the promotion and sustenance of a common life of relationships, in contrast to both the deontological morality of individual rights and the morality of utilitarianism that are the most prominent within modern Western moral philosophy. Within this philosophy itself moral diversity is represented by the debates between utilitarians and deontologists, and more recently criticism of both camps by defenders of virtue theory and communitarianism (see Deontological ethics; Utilitarianism; Virtue ethics; Community and communitarianism). Such differences have motivated the doctrine of descriptive relativism: that there exists extensive diversity of moral judgment across time, societies and individuals, and that it concerns central moral values and principles. Critics of descriptive relativism argue that it fails to account for important moral similarities across cultures such as prohibitions against killing innocents and provisions for educating and socializing the young.
A relativist response given by Michael Walzer (1987) is to argue that shared norms must be described in an extremely general way and that once one examines the concrete forms they take in different societies, one sees significant variety, for example, in which persons count as 'innocent'. The descriptive relativist might go so far as to assert that no significant similarities exist, but an alternative position is that broad similarities exist that are compatible with significant differences among the moralities human beings have held. Critics of descriptive relativism also argue that many moral beliefs presuppose religious and metaphysical beliefs, and that these beliefs, rather than any difference in fundamental values, give rise to much moral diversity (see Religion and morality §3). Also, differences in moral belief across different societies may not arise from differences in fundamental values but from the need to implement the same values in different ways given the varying conditions obtaining in these societies. One relativist reply is that while such explanations apply to some moral disagreements, they cannot apply to many others, such as disagreements over the rightness of eating animals or the moral status of the foetus or the rightness of sacrificing an innocent person for the sake of a hundred more. 2. Meta-ethical relativism The most heated debate about relativism revolves around the question of whether descriptive relativism supports meta-ethical relativism: that there is no single true or most justified morality. There is no direct path from descriptive to meta-ethical relativism; the most plausible argument for meta-ethical relativism is that it is part of a larger theory of morality that best explains actual moral diversity.
Critics of meta-ethical relativism point out that moral disagreement is consistent with the possibility that some moral judgments are truer or more justified than others, just as disagreement among scientists does not imply that truth is relative in science. Some relativists are unimpressed by the analogy with science, holding that disagreements about the structure of the world can be sufficiently radical to undermine the assumption that there is an absolute truth to be found. This defence of meta-ethical relativism amounts to founding it upon a comprehensive epistemological relativism that expresses scepticism about the meaningfulness of talking about truth defined independently of the theories and justificatory practices of particular communities of discourse (see Epistemic relativism). An alternative relativist response is to take a nonrelativist stance towards science and to drive a wedge between scientific and moral discourse. Defenders of such a morality-specific meta-ethical relativism argue that scientific disagreements can be explained in ways that are consistent with there being a nonrelative truth about the structure of the physical world while moral disagreements cannot be treated analogously. For example, much scientific disagreement may be traced to insufficient or ambiguous evidence or distortions of judgment stemming from personal interests. Relativists have argued that such explanations will not work for moral disagreements such as the ones mentioned above concerning the eating of animals, abortion, and the sacrifice of an innocent to save more lives. In offering alternative explanations of moral disagreement, morality-specific relativists tend to adopt a 'naturalistic' approach to morality in the sense that they privilege a scientific view of the world and fit their conceptions of morality and moral disagreement within that view. 
They deny that moral values and principles constitute an irreducible part of the fabric of the world and argue that morality is best explained on the theory that it arises at least in part from custom and convention. On Wong's view (1984), for example, a good part of morality arises out of the need to structure and regulate social cooperation and to resolve conflicts of interest. Meta-ethical relativism is true because there is no single valid way to structure social cooperation. Morality-specific relativism divides into cognitive and non-cognitive versions (see Moral judgment §1). On C.L. Stevenson's emotivist view (1944), for example, moral discourse merely expresses emotion and influences the attitudes and conduct of others (see Emotivism). Cognitive relativists, such as Mackie, Harman, Foot and Wong, interpret moral judgments as expressing belief, on the grounds that moral judgments are often argued or judged true or false on the basis of reasons. Within cognitive relativism, there are those who believe that there is no single true morality because more than one morality is true, and those who believe that there is no single true morality because all are false. J.L. Mackie (1977) represents the latter camp, on the ground that while morality actually arises out of custom and convention, the meanings of moral terms presuppose a mistaken reference to sui generis properties that provide everyone with a reason for acting according to morality (see Value, ontological status of). Other cognitive relativists see no need to construe moral terms as containing a reference to nonexistent properties and instead tie their cognitive content to certain standards and rules. According to such a standards relativism, moral language is used to judge and to prescribe in accordance with a set of standards and rules.
Different sets of standards and rules get encoded into the meaning of ethical terms such as 'good', 'right' and 'ought' over time, and into individuals, groups, or societies in such a way that two apparently conflicting moral beliefs can both be true. Though under a relativist analysis the beliefs express no conflicting claims about what is true, they do conflict as prescriptions as to what is to be done or as to what kinds of things are to be pursued. The disagreement is purely pragmatic in nature, though parties to the disagreement may not be aware of this if they erroneously assume they share the relevant standards. Another crucial question for the standards relativist concerns whose standards and rules apply when someone makes a moral judgment. Suppose that Jones makes a moral judgment about what Smith ought to do, but that the standards Jones applies to guide his own conduct are not the same as the standards Smith uses to guide hers. One possibility is that Jones uses Smith's standards to judge what she ought to do. Another possibility offered by Harman in some of his writing about relativism is that one must judge others by standards one shares with them. His theory is that morality consists of implicit agreements for the structuring of social cooperation. Moral judgments implying that the subjects have a reason to do what is prescribed make sense only as prescriptions based on what the speakers and subjects (and the intended audience of the judgments) have agreed to do. Other standards relativists observe that people use their own standards in judging the conduct of others, whether or not they believe these others to share their standards. There are radical and moderate versions of meta-ethical relativism. Radical relativists hold that any morality is as true or as justified as any other. 
Moderate relativists, such as Foot (1978), Walzer and Wong (1984), deny that there is any single true morality but also hold that some moralities are truer or more justified than others. On Wong's view, for instance, certain determinate features of human nature and similarities in the circumstances and requirements of social cooperation combine to produce universal constraints on what an adequate morality must be like. It may be argued, for example, that a common feature of adequate moralities is the specification of duties to care for and educate the young, a necessity given the prolonged state of dependency of human offspring and the fact that they require a good deal of teaching to play their roles in social cooperation. It may also be a common feature of adequate moralities to require of the young reciprocal duties to honour and respect those who bring them up, and this may arise partly from the role that such reciprocity plays in ensuring that those who are charged with caring for the young have sufficient motivation to do so. Such common features are compatible with the recognition that adequate moralities could vary significantly in their conceptions of the values that cooperation should most realize. Some moralities could place the most emphasis on community-centred values that require the promotion and sustenance of a common life of relationships, others could emphasize individual rights, and still others could emphasize the promotion of utility. 3. Normative relativism Does meta-ethical relativism have substantive implications for action? Normative relativism - the doctrine that it is morally wrong to pass judgment on or to interfere with the moral practices of others who have adopted moralities different from one's own - is often defended by anthropologists, perhaps in reaction to those Western conceptions of the inferiority of other cultures that played a role in colonialism.
It also has application to disagreements within a society such as that concerning the morality of abortion, where the positions of the disputing parties seem ultimately to be based on fundamentally different conceptions of personhood. As in the case of descriptive and meta-ethical relativism, however, there is no direct path from meta-ethical to normative relativism. One could hold consistently that there is no single true morality while judging and interfering with others on the basis of one's own morality. Wong has proposed a version of normative relativism consistent with the point that nothing normative follows straightforwardly from meta-ethical relativism. Meta-ethical relativism needs to be supplemented with a liberal contractualist ethic to imply an ethic of nonintervention. A liberal contractualist ethic requires that moral principles be justifiable to the individuals governed by these principles. If no single morality is most justified for everyone, liberal normative relativism may require one not to interfere with those who have a different morality, though the requirement of noninterference may not be absolute when it comes into conflict with other moral requirements such as prohibitions against torture or the killing of innocents (see Liberalism). 4. Relativism and moral confidence A reason why relativism has been feared is the thought that it could easily slide into moral nihilism. Could one continue living according to one's moral values, which sometimes require significant personal sacrifice, if one can no longer believe that they are truer or more justified than other values that require incompatible actions? One relativist response is that one may reasonably question the importance of certain features of one's morality upon adopting a view of their conventional origin. Consider that duties to give aid to others are commonly regarded as less stringent than duties not to harm them.
Gilbert Harman (1975) has proposed that this difference results from the superior bargaining position of those with greater material means in the implicit agreement giving rise to morality. Those with lesser material means may reasonably question this feature of morality, if they are persuaded of Harman's explanation. Notice, however, that it is not merely the supposition that this feature arose from convention that may undermine one's confidence in it. With regard to other features of one's morality, one may adopt a relativist view of them and continue to prize them simply because they are as good as any other and because they help to constitute a way of life that is one's own. Admittedly, people who condemn torture and unremitting cruelty as an offence against the moral fabric of the world may possess a certitude not available to relativists and may find it easier to make the personal sacrifices morality requires. Moral certitude has its own liabilities, however, and has itself contributed to the unremitting cruelty that human beings have inflicted upon each other. See also: Morality and ethics; Relativism; Social relativism References and further reading Benedict, R. (1934) Patterns of Culture, New York: Penguin. (Argues that different cultures are organized around different and incommensurable values.) Foot, P. (1978) Moral Relativism (The Lindley Lectures), Lawrence, KS: University of Kansas Press. (Defends a form of moderate relativism.) Harman, G. (1975) 'Moral Relativism Defended', Philosophical Review 84: 3-22. (Argues that morality is founded on implicit agreement and that moral 'ought to do' judgments presuppose that speaker, subject and intended audience share the relevant moral standards.) Harman, G. (1984) 'Is There a Single True Morality?', in D. Copp and D. Zimmerman (eds) Morality, Reason and Truth: New Essays on the Foundations of Ethics, Totowa, NJ: Rowman and Allanheld.
(Discusses the relation between a naturalistic approach to morality and relativism.) Harman, G. and Thomson, J. (1996) Moral Relativism and Moral Objectivity, Cambridge, MA: Blackwell. (Most comprehensive statement of Harman's relativism. Modifies some earlier positions taken.) Herskovits, M. (1972) Cultural Relativism: Perspectives in Cultural Pluralism, New York: Vintage Books. (Anthropologist argues for meta-ethical and normative relativism.) Krausz, M. (1989) Relativism: Interpretation and Confrontation, Notre Dame, IN: University of Notre Dame Press. (Besides the articles from this volume specifically identified here, this is a good survey of different perspectives on descriptive and meta-ethical relativism.) Ladd, J. (1973) Ethical Relativism, Belmont, MA: Wadsworth. (A collection of philosophical and anthropological essays on descriptive and meta-ethical relativism.) MacIntyre, A. (1988) Whose Justice? Which Rationality?, Notre Dame, IN: University of Notre Dame Press. (Accepts a strong version of descriptive relativism in which different moral traditions contain incommensurable values and standards of rational justification, but argues against meta-ethical relativism on the grounds that traditions may be compared with respect to their ability to resolve internal problems and to explain why other traditions have failed to solve their own problems.) Mackie, J.L. (1977) Ethics: Inventing Right and Wrong, Harmondsworth: Penguin. (Defends a sceptical form of relativism under which moral judgments lack the objectivity they purport to have. Hence no standard moral judgments are true.) Nagel, T. (1986) The View from Nowhere, New York: Oxford University Press. (Criticism of arguments for meta-ethical relativism from moral diversity.) Plato (c.380-367 BC) Theaetetus, in The Collected Dialogues of Plato, ed. E. Hamilton and H. Cairns, Princeton, NJ: Princeton University Press, 1961.
(Statement of a conventionalist and relativist view of morality attributed to Protagoras.) Scanlon, T.M. (1995) 'Fear of Relativism', in R. Hursthouse, G. Lawrence and W. Quinn (eds) Virtue and Reasons: Philippa Foot and Moral Theory, Oxford: Clarendon Press. (Discussion of why relativism appears to be a threat to the importance of morality.) Stevenson, C.L. (1944) Ethics and Language, New Haven, CT: Yale University Press. (Defends a noncognitivist theory of moral judgment.) Walzer, M. (1987) Interpretation and Social Criticism, Cambridge, MA: Harvard University Press. (Defence of moderate meta-ethical relativism based on the theory that the meaning of general values is given through specific practices.) Williams, B. (1972) Morality: An Introduction to Ethics, New York: Harper & Row. (Criticism of some versions of meta-ethical and normative relativism.) Wong, D. (1984) Moral Relativity, Berkeley, CA: University of California Press. (A defence of moderate relativism based on a naturalistic approach. Some chapters presuppose contemporary philosophy of language that some may regard as technical.) Wong, D. (1991) 'Three Kinds of Incommensurability', in M. Krausz (ed.) Relativism: Interpretation and Confrontation, Notre Dame, IN: University of Notre Dame Press. (Discusses ways in which value differences between cultures may result in different criteria for the rationality of belief about the world.) Wong, D. (1996) 'Pluralistic Relativism', Midwest Studies in Philosophy 20: 378-400. (More discussion about the constraints that all adequate moralities would have to meet.)
From checker at panix.com Sun Jul 17 00:11:10 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:11:10 -0400 (EDT) Subject: [Paleopsych] Routledge: Onora O'Neill: Universalism in Ethics Message-ID: Onora O'Neill: Universalism in Ethics The Routledge Encyclopedia of Philosophy The claim that ethical standards or principles are universal is an ancient commonplace of many ethical traditions and of contemporary political life, particularly in appeals to universal human rights. Yet it remains controversial. There are many sources of controversy. Universalism in ethics may be identified with claims about the form, scope or content of ethical principles, or with the very idea that ethical judgment appeals to principles, rather than to particular cases. Or it may be identified with various claims to identify a single fundamental universal principle, from which all other ethical principles and judgments derive. These disagreements can be clarified, and perhaps in part resolved, by distinguishing a number of different conceptions of universalism in ethics. 1. Form and scope: principles for everybody 2. Content: formal principles or uniform requirements? 3. Universalism and particularism: principles and judgment 4. Fundamental principles: 'golden rules' 5. Fundamental principles: Kantian universalizability 1. Form and scope: principles for everybody One distinctive understanding of universalism in ethics is that ethical principles are principles for everybody. They prescribe obligations for everybody, define rights for everybody, list virtues for everybody. The most minimal version of ethical universalism is a claim about the form of ethical principles or standards. It is the claim that ethical principles hold for all and not merely for some, that is, for everybody without exception. Those who hold that ethical principles are universal in form often disagree about their scope, that is to say about which beings comprise 'everybody'. 
Plato's character Meno tells Socrates that there are quite different virtues for men and women, for boys and girls, for old men and slaves (Meno 71e). On the other hand, Cicero famously asserted that 'there will not be different laws at Rome and at Athens, now and in the future, but one eternal and unchangeable law for all nations and all times' (De Republica III, 33) (see Cicero, M.T. §2); and St Paul proclaimed that 'there is neither Jew nor Greek, there is neither bond nor free, there is neither male nor female: for ye are all one in Christ Jesus' (Galatians 3:28). One very influential understanding of universalism in ethics, shared by many religions, by the natural law and liberal traditions, and by many others, is the contention that ethical principles are universal in form and cosmopolitan in scope, in that they hold for all humans. However, any cosmopolitan view of the scope of ethical principles must note that living up to obligations, virtues and even some rights is impossible for human beings who lack mature capacities for action. Neither infants nor small children, neither the retarded nor the senile, can be held accountable for carrying out obligations, living virtuously or exercising certain rights, such as political rights. Yet humans who lack these capacities might have other rights, for example, to care and protection. Those who can suffer but not act can be moral patients but not moral agents, possessing some rights, but not the full range of obligations, virtues or rights. The scope of different sorts of principles of universal form will evidently have to vary (see Moral agents; Responsibility). Many think that the scope of some ethical principles is more-than-cosmopolitan. Jeremy Bentham famously declared that the criterion of moral standing was 'not, Can they reason? nor, Can they talk? but Can they suffer?'
(1789: 412, footnote; original emphasis); Hindus and Buddhists too extend moral concern beyond humankind; some environmentalists extend concern not only to nonhuman animals, but to plants, even to species and habitats (see Animals and ethics; Bentham, J. §2; Duty and virtue, Indian conceptions of; Environmental ethics; Moral standing §§2-3). Other advocates of principles of universal form join Meno in holding that their scope is less-than-cosmopolitan. For example, communitarians (who sometimes describe themselves as rejecting universal principles) advocate principles of universal form whose scope is restricted to particular communities (see Community and communitarianism); some virtue ethicists hold similar views (see Virtue ethics §5). 2. Content: formal principles or uniform requirements? A second conception of universalism in ethics emphasizes the content as well as the form and scope of principles. Principles which hold for everybody will prescribe or recommend the same for everybody (same obligations, same rights, same virtues and so on). Advocates of universal principles see this as a merit: they see equality of requirement and entitlement as ethically important (see Equality §3). For example, discussions of universal human rights emphasize not only that all humans have rights, but that they all have the same rights. Two objections are commonly raised. The first is that principles which prescribe the same for all will be abstract and general, so provide too little guidance. The second is that they will be too demanding and specific, prescribing with senseless and heartless uniformity for differing cases and situations. On this account, universal principles are either too formal and minimal or else too uniformly demanding. Evidently the two criticisms cannot both be true of one and the same universal principle.
If a principle is so abstract that it provides no practical guidance, then it will not prescribe rigid uniformity of action; conversely, if it prescribes with rigid uniformity it will not fail to guide action. The charge that ethical principles which prescribe the same for all abstract from differences between cases is true, but not damaging. No principle of action - whether of universal or non-universal form, whether of cosmopolitan or lesser scope - can prescribe with total specificity; even very explicit principles abstract from many circumstances. It follows that principles of action can always be satisfied in varied ways. A principle such as 'Tell the truth' does not prescribe what we must say to whom or when; a principle such as 'Pay your debts' does not determine the means or manner of repayment. Principles of action, including ethical principles, constrain action or entitlements, rather than picking out a single, wholly determinate line of action. Abstract principles can therefore guide action yet allow for flexible interpretation or application that takes account of differences between cases. So an ethics of universal principles can readily avoid both barren formalism and doctrinaire rigorism. 3. Universalism and particularism: principles and judgment Since universal ethical principles are always to some extent abstract or indeterminate they must be supplemented by judgment in selecting among possible implementations. This point is recognized, indeed stressed, by advocates of universal principles. Kant (1781/1787), for example, insisted that there cannot be complete rules for judgment, and that principles cannot entail their determinate applications, which require judgment (see Kant, I. §12).
The serious disagreement lies not between those who think that ethics needs only principles and those who think it needs only judgment of particular cases, but between those who think principles and judgment are both needed and those who believe that judgment alone will be enough. The deepest opposition to any sort of ethical universalism comes from ethical particularists who hold that unmediated apprehension of particular cases can guide ethical life. Ethical particularists seek to anchor ethical judgment in perception of and responsiveness to the particular, in attentiveness to the case at hand, in the salience of the personal relationship and its claims (see Friendship; Impartiality §4). They usually hold that ethical life revolves around character and virtue. The most radical cast doubt on the very conception of following a rule or principle; the less radical cite the issues of §2 as evidence that an ethics of rules and duties is inadequate (see Moral particularism). Both ethical particularists (who appeal only to judgment) and universalists (who argue that judgment is used in combination with principles) have found it difficult to explain how judgment works. Some particularists describe it as analogous to perceiving or attending or to the exercise of a craft skill. Some universalists see judgment as the skill of identifying acts that fall within the constraints set by a plurality of principles (see Moral judgment). 4. Fundamental principles: 'golden rules' Other conceptions of universalism in ethics combine views of the form, scope and sameness of content of principles with ambitious claims that a single fundamental universal principle provides the basis for all derivative ethical principles and ultimately for ethical judgment of particular cases. Often the proposed fundamental principle is a version of a 'golden rule'.
Variously formulated golden rules are found in Hindu and Confucian sacred texts, and in many other traditions, including natural law and popular ethical debate. One well known golden rule is Christian with Jewish antecedents: 'Do unto others as you would that they should do unto you' (for specific formulations see Matthew 7:12, Luke 6:31; for antecedents Tobias 4:15). Others are prohibitions rather than injunctions, such as 'Do not do unto others what you would not have them do unto you'. These would-be foundational principles have been criticized for linking ethics too closely to agents' desires or consent. Why should willingness to be on the receiving end of like action make it permissible? If masochists are willing to suffer others' sadism, would that make sadism right? More generally, can acceptance of being on the receiving end of like action legitimate anything? This problem can be overcome only by building additional constraints or complexities into the idea of considering what one would desire or consent to when putting oneself into another's shoes. This has been attempted in various principles, which are first cousin to golden rules, that have been influential in secular work in ethics. Most famously J.S. Mill asserted, in Utilitarianism (1861), that 'in the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility'. The link Mill draws between utilitarianism and golden rules arises only if agents consider not what they as individuals would want if on the receiving end, but what they, taking account of all others' desires, would want if on the receiving end. Only then can a golden rule reflect everybody's desires, and so be thought of as aiming at the greatest happiness (see Utilitarianism; Mill, J.S. §10). Other approaches of this sort have recently been advocated by P. Singer (1972), R.M. Hare (1975) and A.
Gewirth (1987), each of whom recognizes affinities as well as differences between his proposal for the foundations of ethics and traditional golden rules. For example, Gewirth suggests that a rational golden rule would read 'Do unto others as you have a right that they do unto you', while Hare advocates a universal prescriptivism by which the fundamental criterion for ethical judgment is that agents be willing to universalize their judgments, that is, extend them to all situations identical in their universal properties (see Prescriptivism; Hare, R.M. §§1-2). There has been much discussion of the plausibility of these proposals, which generally reject the emphasis traditional golden rules give to what one would have if the particular victim reciprocated, and introduce some reference to what one would want if one's own principle were to be universally adopted or if one's desires took account of others' desires. These writers advocate a strong form of ethical universalism: not merely do they defend a single fundamental ethical principle, but they insist that it refer to the desires that all hold, or ought if rational to hold.

5. Fundamental principles: Kantian universalizability

An alternative conception of universalism in ethics rejects golden rules and seeks to anchor all ethical justification in a more formal fundamental universal principle, which does not refer to desires or consent to fix the content of ethics. The most famous and most ambitious attempt to go further is Kant's 'categorical imperative', of which the best known version runs: 'Act only on that maxim through which you can at the same time will that it should become a universal law' ([1785] 1903: 421). Kant claims to show that 'all imperatives of duty can be derived from this one imperative as their principle' (421).
He insists that in such derivations no reference be made either to anyone's happiness or desires, consent or agreement, and that the categorical imperative is not a version of a golden rule (which he dismisses as trivial; 430, footnote). Kant's views have been influential: a German scholar recently commented that 'Kant succeeded with his objection almost in invalidating the golden rule and disqualifying it from future discussion in ethics' (Reiner 1983: 274). English-language philosophy has been less convinced that Kant undermined golden rule approaches. J.S. Mill was neither the first nor the last to think that Kant's claim to derive all principles of duty from the categorical imperative was complete nonsense. He wrote of Kant:

when he begins to deduce from this precept any of the actual duties of morality, he fails, almost grotesquely, to show that there would be any contradiction, any logical (not to say physical) impossibility, in the adoption by all rational beings of the most outrageously immoral rules of conduct. All he shows is that the consequences of their universal adoption would be such as no one would choose to incur. (1861: 207; original emphasis)

There has been widespread scepticism about Kant's supposed claim to show that 'immoral rules of conduct' are self-contradictory. However, he in fact makes the more circumspect modal claim that we should not act on principles which we cannot simultaneously 'will as universal laws'. An example of such a principle is that of false promising. Kant holds that false promisers who try (incoherently) to will false promising as a universal law thereby will the destruction of the very trust on which their own attempts to promise falsely must rely.
Hence when we try to act on such principles Kant holds that we

in fact do not will that our maxim (principle) should become a universal law - since this is impossible for us - but rather that its opposite should remain a law universally: we only take the liberty of making an exception to it for ourselves (or even just for this once). ([1785] 1903: 424; original emphasis)

In 'deriving' an 'actual principle of duty' from the categorical imperative, Kant takes it that agents not only seek principles of universal form and cosmopolitan scope which prescribe the same for all, but shun any principles which cannot be 'willed for all'. Kantian justifications of such principles, unlike golden rule justifications, do not appeal to the desires, the happiness or the acceptance of those on the receiving end, nor indeed to actual or hypothetical desires of any or of all agents. The distinctive modal character of Kantian universalizability is its appeal to what can be willed for all (rather than to what actually is or hypothetically would be willed by all). It remains a matter of considerable controversy whether a strictly Kantian approach can be used to construct an account of specific principles of duty, virtue or entitlement, or whether it is indeed too formal and minimal to sustain these derivations.

See also: Critical Theory; Intuitionism in ethics; Theological Virtues

References and further reading

Bentham, J. (1789) An Introduction to the Principles of Morals and Legislation, ed. J.H. Burns and H.L.A. Hart, revised F. Rosen, Oxford: Clarendon Press, 1996. (Classic statement of utilitarianism.)
Cicero, M.T. (54-51 BC) De Republica, trans. M. Grant, On Government, Harmondsworth: Penguin, 1993, books III, V and VI. (Early form of universalism about scope of moral principles.)
Gewirth, A. (1987) 'The Golden Rule Rationalized', Midwest Studies in Philosophy 3: 133-44. (Modern form of universalism, with affinities to golden rules.)
Gewirth, A.
(1988) 'Ethical Universalism and Particularism', Journal of Philosophy 85: 283-301. (A universalist approach to ethical judgment.)
Hare, R.M. (1963) Freedom and Reason, Oxford: Clarendon Press. (Basic source on universal prescriptivism.)
Hare, R.M. (1975) 'Abortion and the Golden Rule', Philosophy and Public Affairs 3: 201-22. (Application of universal prescriptivism to problem of abortion; includes discussion of golden rules.)
Herman, B. (1993) The Practice of Moral Judgment, Cambridge, MA: Harvard University Press. (Insightful discussions of difficulties raised about Kantian ethics.)
Kant, I. (1781/1787) Kritik der Reinen Vernunft, trans. N. Kemp Smith, Critique of Pure Reason, London: Macmillan, 1973. (Locus classicus for Kant's insistence that there cannot be complete rules for judgment.)
Kant, I. (1785) Grundlegung zur Metaphysik der Sitten, in Kants gesammelte Schriften, ed. Königlichen Preußischen Akademie der Wissenschaften, Berlin: Reimer, vol. 4, 1903; trans. H.J. Paton, Groundwork of the Metaphysics of Morals (originally The Moral Law), London: Hutchinson, 1948; repr. New York: Harper & Row, 1964. (References made to this work in the entry give the page number from the 1903 Berlin Akademie volume; these page numbers are included in the Paton translation. Classic, short, if difficult, exposition of Kant's ethics.)
McDowell, J. (1979) 'Virtue and Reason', Monist 62 (3): 331-50; revised version repr. as 'Non-cognitivism and Rule Following', in S. Holtzman and C. Leach (eds) Wittgenstein: To Follow a Rule, London: Routledge & Kegan Paul, 1981, 141-62. (An influential statement of a radical particularist position, which questions the very possibility of following rules or principles.)
Mill, J.S. (1861) Utilitarianism, in J.M. Robson (ed.) Collected Works of John Stuart Mill, vol. 10, Essays on Ethics, Religion and Society, Toronto: University of Toronto Press, 1969. (Account of utilitarianism expanding on Bentham.)
O'Neill, O.
(1987) 'Abstraction, Idealization and Ideology in Ethics', in J.D.G. Evans (ed.) Moral Philosophy and Contemporary Problems, Cambridge: Cambridge University Press. (What is abstraction? Is it avoidable? Is it harmful? In asking these questions, this essay is particularly relevant to §§2 and 3.)
O'Neill, O. (1989) Constructions of Reason: Explorations of Kant's Practical Philosophy, Cambridge: Cambridge University Press. (Papers on patterns of universalist ethical reasoning that use Kant's categorical imperative.)
O'Neill, O. (1991) 'Kantian Ethics', in P. Singer (ed.) A Companion to Ethics, Oxford: Blackwell, 175-85. (Overview of Kant's position and some well-known criticisms.)
Plato (c.386-380 BC) Meno, in Protagoras and Meno, trans. W.K.C. Guthrie, Harmondsworth: Penguin, 1956. (Discusses the nature of virtue.)
Potter, N. and Timmons, M. (eds) (1985) Morality and Universality: Essays on Ethical Universalizability, Dordrecht: Reidel. (Useful papers on different conceptions of universality in ethics; large bibliography.)
Reiner, H. (1983) 'The Golden Rule and the Natural Law', in Duty and Inclination: The Fundamentals of Morality Discussed and Redefined with Special Regard to Kant and Schiller, trans. M. Santos, The Hague: Nijhoff, 271-93. (Historical overview; useful references especially to German literature.)
Singer, P. (1972) 'Famine, Affluence and Morality', Philosophy and Public Affairs 1: 229-43. (Well-known utilitarian argument to show that beneficence should have cosmopolitan scope: the affluent should help the hungry however far away they may be.)
Wattles, J. (1996) The Golden Rule, Oxford: Oxford University Press. (Examines the principle 'Do to others as you want others to do to you' in contexts of psychology, philosophy and religion, from Confucius, Hillel and Jesus to R.M. Hare and Paul Ricoeur.)
Wiggins, D.
(1980) 'Deliberation and Practical Reason', in Needs, Values, Truth: Essays in the Philosophy of Value, Aristotelian Society Series, vol. 6, Oxford: Blackwell, 1987; revised edn, 1991. (A particularist reading of Aristotle on judgment.)
Williams, B. (1985) Ethics and the Limits of Philosophy, Cambridge, MA: Harvard University Press and London: Fontana. (Particularist criticism of aspects of ethical universalism.)

From checker at panix.com Sun Jul 17 00:11:17 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:11:17 -0400 (EDT) Subject: [Paleopsych] Routledge: Daniel M. Weinstock: Moral Pluralism Message-ID:

Daniel M. Weinstock: Moral Pluralism The Routledge Encyclopedia of Philosophy

Moral pluralism is the view that moral values, norms, ideals, duties and virtues are irreducibly diverse: morality serves many purposes relating to a wide range of human interests, and it is therefore unlikely that a theory unified around a single moral consideration will account for all the resulting values. Unlike relativism, however, moral pluralism holds that there are rational constraints on what can count as a moral value. One possible, though not necessary, implication of moral pluralism is the existence of real moral dilemmas. Some philosophers have deemed these to be inconceivable; in fact, however, they do not constitute a serious threat to practical reason. Another possible implication of moral pluralism is the existence within a society of radically different but equally permissible moralities. This poses a challenge for political philosophy, and might justify a liberal view that particular conceptions of the good life ought not to be invoked in the formulation of public policy.

1. Moral pluralism and moral theory
2. Relativism and moral dilemmas
3. Moral pluralism and political philosophy

1.
Moral pluralism and moral theory

Moral pluralism is the view that moral values, norms, ideals, duties and virtues cannot be reduced to any one foundational consideration, but that they are rather irreducibly diverse. As such, moral pluralism is a metaphysical thesis, in that it tells us what moral considerations there are. Pluralist moral philosophers disagree as to exactly what the plural sources of moral value are. For example, Sir David Ross (1930) distinguished six species of duty, including duties of fidelity and reparation, of gratitude, of justice, of beneficence, of self-improvement and of non-maleficence; and Thomas Nagel (1979) has claimed that the conflicts among diverse moral principles are due to there being five distinct sources of value - special allegiances, universal rights, utility, perfectionist ends of self-development, and individual projects (see Nagel, T. §5).

Despite the differences between these accounts of the sources of moral value, the resolution of which constitutes a challenge for substantive moral theory, these thinkers can be seen as united in the view that morality has developed to protect and promote basic interests related to human wellbeing and flourishing, but that since there is no unique form that human wellbeing must take, there can consequently not be a theory of morality unified around one supreme value (see Happiness §3; Welfare). This is not to say that the truth of moral pluralism disqualifies any attempt at formulating a moral theory (see Morality and ethics §§1-2). Among the many moral values which human beings pursue, there are undoubtedly some that can be grouped together and accounted for in terms of some more general value relevant to the particular set of human interests with which they are all in one way or another concerned. Moral pluralism implies simply that none of these values could plausibly claim hegemony over the entire set of moral considerations.
Historically, moral pluralism has been linked with controversial positions in moral epistemology and the ontology of value, according to which moral facts are real and non-natural, and are given as self-evident to a distinct human faculty of moral intuition (see Moral realism; Intuitionism in ethics). It is in fact compatible with a wide range of philosophical positions on these issues, including anti-realism and naturalism.

Some philosophers have argued that the diversity of our moral concepts is a distinctive feature of modernity. Alasdair MacIntyre, for example, has claimed that the plurality of conflicting considerations which make up the moral lexicon of the modern agent is a sign of cultural decay. Modern morality is for MacIntyre a congeries of concepts which have been inherited by modern agents from past forms of life, but which have been torn from the coherent concrete human practices within which they originated, and in the context of which alone they have any real meaning. Moral pluralism is therefore in his view a symptom of our moral discomfiture.

Moral pluralism has also been challenged by defenders of classical single-principle moral theories. Yet there have also been signs of theoretical rapprochements. Indeed, many modern consequentialists are abandoning the simple view of human wellbeing embodied in classical utilitarianism in favour of more multifaceted accounts (see Consequentialism). And many deontologists can be read as formulating rational priority rules ranking deontological constraints over other types of moral considerations, rather than as banishing the latter completely from the realm of the moral (see Deontological ethics).

2. Relativism and moral dilemmas

It is important to distinguish moral pluralism from a thesis with which it has too often been confused, namely that of moral relativism.
A relativist claims that the truth of moral judgments is relative to the conventions of the social group (or even to the individual whim) of the person issuing the judgment, and that these conventions or whims are not themselves subject to any further criterion of adequacy. There are therefore, according to relativists, no rational constraints on what can count as a moral value, and it is senseless in their view to speak of the truth, falsity or justification of moral judgments (see Moral relativism). Moral pluralism, in contrast, holds that while the variety of moral principles applying to human beings is irreducible, it is not infinite. Rather, there are constraints on what can count as a moral value (and there is therefore sense in speaking of moral truth and falsity). These constraints might, for example, have to do with the (inherently diverse but not infinite) forms which human wellbeing and flourishing can take. Moral pluralism is therefore compatible with the existence of rational constraints upon moral thought.

But the fact that a number of statements to do with moral value might all be true while apparently recommending incompatible actions, and that real, as opposed to merely apparent, moral dilemmas emerge as a real possibility, has been thought by some philosophers to disqualify it as a theory of value by making it sin against basic axioms of deontic logic. These include the principle that 'ought implies can' and the principle of agglomeration, which states that if I ought to do A and I ought to do B, then I ought to do A and B. This troubling apparent consequence of moral pluralism must, however, be qualified by a number of observations.

First, as Michael Stocker has observed (1990), the premise that statements about moral value are always act evaluations is an unvindicated assumption of much modern moral theory, yet moral pluralism only leads to moral dilemmas if this assumption is granted.
There might be a number of true moral descriptions of a situation, emphasizing different moral considerations present in it. Any one of these might well on its own give rise to an ought statement, but given the presence of other moral considerations it may give rise only to what Ross has called a prima facie obligation. As the latter are not directly action-guiding, they need not conform to the strictures of deontic logic. Second, certain axioms of deontic logic might actually embody controversial first-order moral propositions. The fact that they conflict with the hypothesis of moral dilemmas does not therefore automatically place the burden of proof upon defenders of the latter. Third, the reality of moral dilemmas need not, as philosophers such as Bernard Williams (1965) have suggested, put paid to all attempts to rationally order our, at times conflicting, moral values. There are reasons supporting both sides of a moral dilemma, and the presence of a moral dilemma, rather than signalling the necessary end of moral inquiry, can point to the need to undertake inquiry into these reasons in a more fine-grained manner. Moral pluralism involves the denial of the existence of a supreme value from which all others might be derived; it does not entail incommensurability, the view that moral considerations cannot be compared and ranked. Thus, for example, there may be rational priority rules allowing us to order the claims of different moral values.

3. Moral pluralism and political philosophy

The plurality of moral values can manifest itself in a number of different ways. Most relevantly from the point of view of political philosophy, it can involve the existence within a society of a number of equally acceptable moral forms of life.
This form of social pluralism poses a set of challenges for political philosophers, suggesting that there may be no simple way of adjudicating conflicts between adherents of equally admirable moral forms of life, or of engaging in the interpersonal welfare comparisons often seen as necessary for the formulation of theories of distributive justice. Moral pluralism has been seen by many philosophers, including John Rawls (1971), Thomas Nagel and Charles Larmore (1987), as calling for the liberal doctrine of state neutrality, the view that particular conceptions of the good ought not to be invoked in the formulation of public policy.

See also: Axiology; Duty; Ideals; Pluralism; Religious Pluralism; Values; Virtues and vices

References and further reading

Berlin, I. (1969) Four Essays on Liberty, Oxford: Oxford University Press. (The classic twentieth-century statement of moral pluralism.)
Larmore, C. (1987) Patterns of Moral Complexity, Cambridge: Cambridge University Press. (Argues that a purely political conception of liberalism flows from the plurality of moral forms of life in a society.)
MacIntyre, A. (1984) After Virtue, Notre Dame, IN: Notre Dame University Press, 2nd edn. (Argues that moral pluralism and conflict result from moral concepts no longer being embedded in concrete social forms.)
Nagel, T. (1977) 'The Fragmentation of Value', in H.T. Englehardt, Jr. and D. Callahan (eds) Knowledge, Value and Belief, Hastings-on-Hudson, NY: Institute of Society, Ethics and the Life Sciences; repr. in Mortal Questions, Cambridge: Cambridge University Press, 1979. (A clear statement of the different sources of moral value.)
Rawls, J. (1971) A Theory of Justice, Cambridge, MA: Harvard University Press, 34-40. (A standard modern argument against a form of moral pluralism identified as 'intuitionism'.)
Ross, W.D. (1930) The Right and the Good, Oxford: Oxford University Press, 21. (The classic statement of an intuitionist moral pluralism.)
Stocker, M.
(1990) Plural and Conflicting Values, Oxford: Oxford University Press. (Argues that value pluralism does not threaten the possibility of sound practical reason.)
Williams, B. (1965) 'Ethical Consistency', Proceedings of the Aristotelian Society, supplementary vol. 39; repr. in Problems of the Self: Philosophical Papers 1956-72, Cambridge: Cambridge University Press, 1973. (Raises problems for moral reasoning caused by the plurality of values.)

From checker at panix.com Sun Jul 17 00:11:21 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:11:21 -0400 (EDT) Subject: [Paleopsych] Routledge: Michael Smith: Emotivism Message-ID:

Michael Smith: Emotivism The Routledge Encyclopedia of Philosophy

Emotivists held that moral judgments express and arouse emotions, not beliefs. Saying that an act is right or wrong was thus supposed to be rather like saying 'Boo!' or 'Hooray!' Emotivism explained well the apparent necessary connection between moral judgment and motivation. If people judge it wrong to lie, and their judgment expresses their hostility, then it comes as no surprise that we can infer that they are disinclined to lie. Emotivism did a bad job of explaining the important role of rational argument in moral practice, however. Indeed, since it entailed that moral judgments elude assessment in terms of truth and falsehood, it suggested that rational argument about morals might be at best inappropriate, and at worst impossible.

In the early part of the twentieth century, under the influence of logical positivism, a new view about the nature of morality emerged: emotivism (see Logical positivism). Emotivists held that when people say, 'It is wrong to tell lies', they express their hostility towards lying and try to get others to share that hostility with them. Moral claims were thus supposed to be very different from claims expressing beliefs. Beliefs purport to represent the world, and so are assessable in terms of truth and falsehood.
Emotions, by contrast, do not purport to represent the world, so moral claims were supposed to elude such assessment (see Analytic ethics §1; Moral judgment §1). Judging acts right and wrong was thus rather like saying 'Boo!' and 'Hooray!'

Emotivism had evident appeal. It is widely agreed that there is a necessary connection of sorts between moral judgment and motivation. If someone judges telling lies to be wrong then they are motivated, to some extent, not to lie. But what people are motivated to do depends on what they approve of, or are hostile towards, not simply on what they believe (see Moral motivation). Imagine, then, that someone's judgment that telling lies is wrong expressed a belief. In order to know whether they are inclined to lie or not we would then need to know, in addition, whether they approve of, or are hostile towards, telling lies. But we need to know no such thing. Knowing that they judge lying wrong suffices to know that they are disinclined to lie. This fits well with the idea that the judgment itself simply expresses hostility.

Emotivism also had its difficulties, however. Though emotivists admitted that rational argument about morals had an important role to play, their view entailed that this role was strictly limited. Since they agreed that less fundamental moral claims are entailed by more fundamental claims along with factual premises, and since they agreed that factual premises could be criticized rationally, they held that less fundamental moral claims must be rationally based. Someone who judges lying wrong because they think that lies are harmful must, they thought, change their mind on pain of irrationality if shown that lying is harmless. But at the same time they insisted that fundamental moral claims - those that are not so derived, like, perhaps, the claim that it is wrong to cause harm - are immune from such rational criticism. This was the so-called 'fact/value gap' (see Fact/value distinction; Logic of ethical discourse).
It is unclear whether emotivists were consistent in allowing even this limited role for rational argument, however. Consider:

1. If it is wrong to cause harm and lying causes harm then it is wrong to tell lies
2. It is wrong to cause harm
3. Lying causes harm
Therefore, it is wrong to tell lies

This argument is valid only if 'It is wrong to cause harm' in premises (1) and (2) means the same thing. If this phrase means different things then there is an equivocation and the argument is straightforwardly invalid. Emotivism entails that someone who asserts (2) expresses hostility towards causing harm. Yet whatever 'It is wrong to cause harm' means in (1), it most certainly does not serve to express such hostility. In (1) the phrase appears in the antecedent of a conditional. Someone who asserts (1) may thus even deny that it is wrong to cause harm. They need therefore have no hostility to express towards causing harm. Philosophers sympathetic to emotivism have tried to rescue it from this objection.

There is a real question whether emotivists themselves should ever have been interested in preserving an important role for rational argument about morals, however. If the function of moral judgment is simply to express emotions and arouse like emotions in others then it follows that rational argument is at best one way, and perhaps not a very good way, of achieving these aims. We might be more effective if we distracted people from the facts and used rhetoric, humiliation and brainwashing instead. It is hard to see how emotivists could find fault with the idea that a practice in which the use of such technologies was widespread could still constitute a perfectly proper moral practice. The best emotivists could say at this point was, 'Boo for persuasion and brainwashing!'
Philosophers who thought this response failed to acknowledge the central and defining role played by rational argument in moral practice concluded that emotivism extracted too high a price for its explanation of the necessary connection between moral judgment and motivation. Subsequent theorists have focused on whether an alternative explanation of the necessary connection is available, one which also accommodates the idea that rational argument plays such a central and defining role. No consensus on this issue has emerged, however.

If nothing else, emotivism succeeded in making clear how difficult it is to explain the necessary connection between moral judgment and motivation, together with the idea that rational argument plays a central and defining role in moral practice, if the emotions that cause our actions are assumed to be beyond rational criticism. Much recent work about the nature of morality proceeds by calling this assumption into question.

See also: Ayer, A.J.; Expression, artistic; Moral knowledge; Moral realism; Morality and emotions §§1-2; Prescriptivism; Stevenson, C.L.

References and further reading

Ayer, A.J. (1936) Language, Truth and Logic, London: Gollancz; 2nd edn, 1946, ch. 6. (Contains a classic statement of emotivism by a logical positivist.)
Blackburn, S. (1984) Spreading the Word, Oxford: Oxford University Press, ch. 6. (Shows how modern versions of emotivism attempt to avoid the problems faced by their ancestor.)
Smith, M. (1994) The Moral Problem, Oxford: Blackwell. (Argues that, contrary to the standard assumption, emotions can be rationally criticized. Ch. 2 contains a critical discussion of Ayer's emotivism and more modern versions.)
Stevenson, C.L. (1944) Ethics and Language, New Haven, CT: Yale University Press. (Another classic statement of emotivism and explanation of the difference between disagreements about values and disagreements about facts.)
Warnock, G. (1967) Contemporary Moral Philosophy, London: Macmillan, ch. 3.
(Contains a critical discussion of emotivism.)

From checker at panix.com Sun Jul 17 00:12:05 2005 From: checker at panix.com (Premise Checker) Date: Sat, 16 Jul 2005 20:12:05 -0400 (EDT) Subject: [Paleopsych] JPSP: Different Emotional Reactions to Different Groups Message-ID:

Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice" [INTERPERSONAL RELATIONS AND GROUP PROCESSES] Cottrell, Catherine A.1,2; Neuberg, Steven L.1,3 Journal of Personality and Social Psychology Volume 88(5), May 2005, p. 770-789

[This journal is put out by the American Psychological Association, the same group that publishes _Psychology, Public Policy, and Law_ that featured the Rushton-Jensen article in its June issue. Thanks to Ted for alerting us to this article.

[First, some summaries:

Scholars: Prejudice a Complex Mechanism Rooted in the Genes
http://theoccidentalquarterly.com/news/printer.php?id=1041
Posted on: 2005-07-05 19:45:18

A recent study in the Journal of Personality and Social Psychology (May 2005) defies longstanding social dogma to suggest that prejudice, or an aversion to members of different groups, is genetically based and arose to enable both group and individual survival. Arizona St. University Professor Steven Neuberg and ASU graduate student Catherine Cottrell, in "Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to Prejudice," describe their study of assessments by 235 European-American students at ASU of possible societal threats posed by nine different groups - activist feminists, African-Americans, Asian-Americans, European-Americans, fundamentalist Christians, gay men, Mexican-Americans, Native Americans, and nonfundamentalist Christians - and the emotions registered by the students at perceived threats associated with the different groups.
Rather than undifferentiated hostility to the other, Neuberg and Cottrell found that different types of threat - physical, ideological, or health - evoked different emotions (fear, anger, disgust). Neuberg interprets these nuances as rooted in real threats that led to an evolutionary response: "It was adaptive for our ancestors to be attuned to those outside the group who posed threats such as to physical security, health or economic resources, and to respond to these different kinds of threats in ways tailored to have a good chance of reducing them."

Whether Neuberg and Cottrell's findings will help to root out the calcified prejudices of such citadels of professed anti-prejudice as the Anti-Defamation League, which continues to proclaim that "Hate is learned," remains to be seen.

References
1. http://www.physorg.com/news4341.html
2. http://www.asu.edu/news/research/prejudicestudy_053105.htm
3. http://content.apa.org/journals/psp/88/5
4. http://www.adl.org/issue_education/hateprejudice/Prejudice2.asp

------------------

Human prejudice has evolved
http://www.physorg.com/news4341.html
5.7.1

Contrary to what most people believe, the tendency to be prejudiced is a form of common sense, hard-wired into the human brain through evolution as an adaptive response to protect our prehistoric ancestors from danger. So suggests a new study published by ASU researchers in the May issue of the Journal of Personality and Social Psychology, which contends that, because human survival was based on group living, "outsiders" were viewed as - and often were - very real threats.

"By nature, people are group-living animals - a strategy that enhances individual survival and leads to what we might call a tribal psychology," says Steven Neuberg, ASU professor of social psychology, who wrote the study with doctoral student Catherine Cottrell.
"It was adaptive for our ancestors to be attuned to those outside the group who posed threats such as to physical security, health or economic resources, and to respond to these different kinds of threats in ways tailored to have a good chance of reducing them." Unfortunately, says Neuberg, because evolved psychological tendencies are imperfectly attuned to the existence of dangers, people might react negatively to groups and their members even when they pose no realistic threat. Neuberg and Cottrell had 235 European-American students at ASU think about nine different groups: activist feminists, African-Americans, Asian-Americans, European-Americans, fundamentalist Christians, gay men, Mexican-Americans, Native Americans and nonfundamentalist Christians. The researchers then had the participants rate these groups on the threats they pose to American society (e.g., to physical safety, values, health, etc.) and report the emotions they felt toward these groups (e.g., fear, anger, disgust, pity, etc.). Consistent with the researchers' hypotheses, findings revealed that distinct prejudices exist toward different groups of people. Some groups elicited prejudices characterized largely by fear, others by disgust, others by anger, and so on. Moreover, the different "flavors" of prejudice were associated with different patterns of perceived threat. Follow-up work further shows that these different prejudices motivate inclinations toward different kinds of discrimination, in ways apparently aimed at reducing the perceived threat. "Groups seen as posing threats to physical safety elicit fear and self-protective actions," Cottrell says. "Groups seen as choosing to take more than they give elicit anger and inclinations toward aggression, and groups seen as posing health threats elicit disgust and the desire to avoid close physical contact."
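The central methodological point here, that a single aggregate "prejudice" score can hide very different emotion profiles, can be illustrated with a toy calculation. The numbers below are invented for illustration and are not from Cottrell and Neuberg's data:

```python
# Hypothetical emotion ratings (0-10 scale) toward two groups.
# Group A evokes a fear-dominant profile, Group B an anger-dominant one;
# the specific values are made up to illustrate the masking effect.
profiles = {
    "Group A": {"fear": 8, "anger": 2, "disgust": 2, "pity": 0},
    "Group B": {"fear": 2, "anger": 8, "disgust": 2, "pity": 0},
}

def general_prejudice(profile):
    """Collapse an emotion profile into one number, as a traditional
    single-attitude measure of prejudice effectively does."""
    return sum(profile.values()) / len(profile)

a = general_prejudice(profiles["Group A"])
b = general_prejudice(profiles["Group B"])
print(a, b)  # identical aggregate scores: 3.0 3.0
print(profiles["Group A"] == profiles["Group B"])  # yet the profiles differ: False
```

Both groups receive the same general score, so any analysis or intervention keyed to that score alone would treat them identically, even though one profile calls for addressing fear and the other anger.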
"One important practical implication of this research is that we may need to create different interventions to reduce inappropriate prejudices against different groups," Neuberg says. For example, if one is trying to decrease prejudices among new college students during freshman orientation, different strategies might be used for bringing different groups together. Neuberg and Cottrell are quick to point out that just because prejudices are a fundamental and natural part of what makes us human doesn't mean that learning can't take place and that responses can't be dampened. "People sometimes assume that, because we say prejudice has evolved roots, we are saying that specific prejudices can't be changed. That's simply not the case," Neuberg says. "What we think and feel and how we behave is typically the result of complex interactions between biological tendencies and learning experiences. Evolution may have prepared our minds to be prejudiced, but our environment influences the specific targets of those prejudices." Source: Arizona State University -------------------- ASU News > Human prejudice has evolved, say ASU researchers http://www.asu.edu/news/research/prejudicestudy_053105.htm Sharon Keeler, sharon.keeler at asu.edu (480) 965-4012 June 1, 2005 Human prejudice has evolved, say ASU researchers Our environment influences the specific targets of those prejudices and how we act on them Could it be that the tendency to be prejudiced evolved as an adaptive response to protect our prehistoric ancestors from danger? So suggest Arizona State University researchers in a new study in the "Journal of Personality and Social Psychology," in which they contend that, because human survival was based on group living, "outsiders" were viewed as - and often were - very real threats.
"By nature, people are group-living animals - a strategy that enhances individual survival and leads to what we might call a `tribal psychology'," says Steven Neuberg, ASU professor of social psychology, who authored the study with doctoral student Catherine Cottrell. "It was adaptive for our ancestors to be attuned to those outside the group who posed threats such as to physical security, health or economic resources, and to respond to these different kinds of threats in ways tailored to have a good chance of reducing them." Unfortunately, says Neuberg, because evolved psychological tendencies are imperfectly attuned to the existence of dangers, people may react negatively to groups and their members even when they actually pose no realistic threat. Neuberg and Cottrell point out that just because prejudices are a fundamental and natural part of what makes us human, that doesn't mean that learning can't take place and that responses can't be dampened. "People sometimes assume that because we say prejudice has evolved roots we are saying that specific prejudices can't be changed. That's simply not the case," Neuberg says. "What we think and feel and how we behave is typically the result of complex interactions between biological tendencies and learning experiences. Evolution may have prepared our minds to be prejudiced, but our environment influences the specific targets of those prejudices and how we act on them." For their study, Neuberg and Cottrell had 235 European American students at ASU think about nine different groups: activist feminists, African Americans, Asian Americans, European Americans, fundamentalist Christians, gay men, Mexican Americans, Native Americans and nonfundamentalist Christians. The researchers then had the participants rate these groups on the threats they pose to American society (e.g., to physical safety, values, health, etc.) and report the emotions they felt toward these groups (e.g., fear, anger, disgust, pity, etc.). 
Consistent with the researchers' hypotheses, findings revealed that distinct prejudices exist toward different groups of people. Some groups elicited prejudices characterized largely by fear, others by disgust, others by anger, and so on. Moreover, the different "flavors" of prejudice were associated with different patterns of perceived threat. Follow-up work further shows that these different prejudices motivate inclinations toward different kinds of discrimination, in ways apparently aimed at reducing the perceived threat. "Groups seen as posing threats to physical safety elicit fear and self-protective actions, groups seen as choosing to take more than they give elicit anger and inclinations toward aggression, and groups seen as posing health threats elicit disgust and the desire to avoid close physical contact," says Cottrell. "One important practical implication of this research is that we may need to create different interventions to reduce inappropriate prejudices against different groups," says Neuberg. Keeler, with Marketing & Strategic Communications, can be reached at (480) 965-4012 or (sharon.keeler at asu.edu). -------------- PsycARTICLES - Journal of Personality and Social Psychology - Vol 88, Issue 5 http://content.apa.org/journals/psp/88/5 [Other interesting stuff in this issue, so I'll give the summaries. Let me know if you'd like to get a copy of some specific article.] 1. Counterfactual Thinking and the First Instinct Fallacy. By Kruger, Justin; Wirtz, Derrick; Miller, Dale T. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 725-735 Most people believe that they should avoid changing their answer when taking multiple-choice tests. Virtually all research on this topic, however, has suggested that this strategy is ill-founded: Most answer changes are from incorrect to correct, and people who change their answers usually improve their test scores. Why do people believe in this strategy if the data so strongly refute it? 
The authors argue that the belief is in part a product of counterfactual thinking. Changing an answer when one should have stuck with one's original answer leads to more "if only . . ." self-recriminations than does sticking with one's first instinct when one should have switched. As a consequence, instances of the former are more memorable than instances of the latter. This differential availability provides individuals with compelling (albeit illusory) personal evidence for the wisdom of always following their 1st instinct, with suboptimal test scores the result. 2. Feeling and Believing: The Influence of Emotion on Trust. By Dunn, Jennifer R.; Schweitzer, Maurice E. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 736-748 The authors report results from 5 experiments that describe the influence of emotional states on trust. They found that incidental emotions significantly influence trust in unrelated settings. Happiness and gratitude--emotions with positive valence--increase trust, and anger--an emotion with negative valence--decreases trust. Specifically, they found that emotions characterized by other-person control (anger and gratitude) and weak control appraisals (happiness) influence trust significantly more than emotions characterized by personal control (pride and guilt) or situational control (sadness). These findings suggest that emotions are more likely to be misattributed when the appraisals of the emotion are consistent with the judgment task than when the appraisals of the emotion are inconsistent with the judgment task. Emotions do not influence trust when individuals are aware of the source of their emotions or when individuals are very familiar with the trustee. 3. Attitude Importance and the Accumulation of Attitude-Relevant Knowledge in Memory. By Holbrook, Allyson L.; Berent, Matthew K.; Krosnick, Jon A.; Visser, Penny S.; Boninger, David S. Journal of Personality and Social Psychology. 
2005 May Vol 88(5) 749-769 People who attach personal importance to an attitude are especially knowledgeable about the attitude object. This article tests an explanation for this relation: that importance causes the accumulation of knowledge by inspiring selective exposure to and selective elaboration of relevant information. Nine studies showed that (a) after watching televised debates between presidential candidates, viewers were better able to remember the statements made on policy issues on which they had more personally important attitudes; (b) importance motivated selective exposure and selective elaboration: Greater personal importance was associated with better memory for relevant information encountered under controlled laboratory conditions, and manipulations eliminating opportunities for selective exposure and selective elaboration eliminated the importance-memory accuracy relation; and (c) people do not use perceptions of their knowledge volume to infer how important an attitude is to them, but importance does cause knowledge accumulation. 4. Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice". By Cottrell, Catherine A.; Neuberg, Steven L. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 770-789 The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. 
As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited. 5. Policewomen Acting in Self-Defense: Can Psychological Disengagement Protect Self-Esteem From the Negative Outcomes of Relative Deprivation? By Tougas, Francine; Rinfret, Natalie; Beaton, Ann M.; de la Sablonnière, Roxane Journal of Personality and Social Psychology. 2005 May Vol 88(5) 790-800 The role of 2 components of psychological disengagement (discounting and devaluing) in the relation between personal relative deprivation and self-esteem was explored in 3 samples of policewomen. Path analyses conducted with the 3 samples revealed that stronger feelings of personal relative deprivation resulted in stronger discounting of work evaluations, which in turn led to devaluing the importance of police work. A negative relation between discounting and self-esteem was observed in all samples. Other related outcomes of disengagement, professional withdrawal and stress, were also evaluated. 6. Self-Esteem and Favoritism Toward Novel In-Groups: The Self as an Evaluative Base. By Gramzow, Richard H.; Gaertner, Lowell Journal of Personality and Social Psychology. 2005 May Vol 88(5) 801-815 The self-as-evaluative base (SEB) hypothesis proposes that self-evaluation extends automatically via an amotivated consistency process to affect evaluation of novel in-groups. Four minimal group studies support SEB. Personal trait self-esteem (PSE) predicted increased favoritism toward a novel in-group that, objectively, was equivalent to the out-group (Study 1).
This association was independent of information-processing effects (Study 1), collective self-esteem, right-wing authoritarianism (RWA), and narcissism (Studies 2 and 3). A self-affirmation manipulation attenuated the association between in-group favoritism and an individual difference associated with motivated social identity concerns (RWA) but did not alter the PSE effect (Study 3). Finally, the association between PSE and in-group favoritism remained positive even when the in-group was objectively less favorable than the out-group (Study 4). 7. Having an Open Mind: The Impact of Openness to Experience on Interracial Attitudes and Impression Formation. By Flynn, Francis J. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 816-826 This article considers how Openness to Experience may mitigate the negative stereotyping of Black people by White perceivers. As expected, White individuals who scored relatively high on Openness to Experience exhibited less prejudice according to self-report measures of explicit racial attitudes. Further, White participants who rated themselves higher on Openness to Experience formed more favorable impressions of a fictitious Black individual. Finally, after observing informal interviews of White and Black targets, White participants who were more open formed more positive impressions of Black interviewees, particularly on dimensions that correspond to negative racial stereotypes. The effect of Openness to Experience was relatively stronger for judgments of Black interviewees than for judgments of White interviewees. Taken together these findings suggest that explicit racial attitudes and impression formation may depend on the individual characteristics of the perceiver, particularly whether she or he is predisposed to consider stereotype-disconfirming information. 8. Resilience to Loss in Bereaved Spouses, Bereaved Parents, and Bereaved Gay Men. 
By Bonanno, George A.; Moskowitz, Judith Tedlie; Papa, Anthony; Folkman, Susan Journal of Personality and Social Psychology. 2005 May Vol 88(5) 827-843 Recent research has indicated that many people faced with highly aversive events suffer only minor, transient disruptions in functioning and retain a capacity for positive affect and experiences. This article reports 2 studies that replicate and extend these findings among bereaved parents, spouses, and caregivers of a chronically ill life partner using a range of self-report and objective measures of adjustment. Resilience was evidenced in half of each bereaved sample when compared with matched, nonbereaved counterparts and 36% of the caregiver sample in a more conservative, repeated-measures ipsative comparison. Resilient individuals were not distinguished by the quality of their relationship with spouse/partner or caregiver burden but were rated more positively and as better adjusted by close friends. 9. Gender Similarities and Differences in Children's Social Behavior: Finding Personality in Contextualized Patterns of Adaptation. By Zakriski, Audrey L.; Wright, Jack C.; Underwood, Marion K. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 844-855 This research examined how a contextualist approach to personality can reveal social interactional patterns that are obscured by gender comparisons of overall behavior rates. For some behaviors (verbal aggression), girls and boys differed both in their responses to social events and in how often they encountered them, yet they did not differ in overall behavior rates. For other behaviors (prosocial), gender differences in overall rates were observed, yet girls and boys differed more in their social environments than in their responses to events.
The results question the assumption that meaningful personality differences must be manifested in overall act trends and illustrate how gender differences in personality can be conceptualized as patterns of social adaptation that are complex and context specific. 10. The Factor Structure of Greek Personality Adjectives. By Saucier, Gerard; Georgiades, Stelios; Tsaousis, Ioannis; Goldberg, Lewis R. Journal of Personality and Social Psychology. 2005 May Vol 88(5) 856-875 Personality descriptors--3,302 adjectives--were extracted from a dictionary of the modern Greek language. Those terms with the highest frequency were administered to large samples in Greece to test the universality of the Big-Five dimensions of personality in comparison to alternative models. One- and 2-factor structures were the most stable across variable selections and subsamples and replicated such structures found in previous studies. Among models with more moderate levels of replication, recently proposed 6- and 7-lexical-factor models were approximately as well replicated as the Big Five. An emic 6-factor structure showed relative stability; these factors were labeled Negative-Valence/Honesty, Agreeableness/Positive Affect, Prowess/Heroism, Introversion/Melancholia, Even Temper, and Conscientiousness. ---------------------------- Journal of Personality and Social Psychology Volume 88(5), May 2005, p. 770-789 Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice" [INTERPERSONAL RELATIONS AND GROUP PROCESSES] Cottrell, Catherine A.1,2; Neuberg, Steven L.1,3 1Department of Psychology, Arizona State University. 2 Correspondence concerning this article should be addressed to Catherine A. Cottrell, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104. E-mail: catherine.cottrell at asu.edu 3 Correspondence concerning this article should be addressed to Steven L.
Neuberg, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104. E-mail: steven.neuberg at asu.edu Outline * Abstract * A Sociofunctional Approach * The Goal Relevance of Discrete Emotions * From Group-Relevant Threats to Discrete Emotions * Hypotheses * Other Contemporary Emotion- and Threat-Based Approaches to Prejudice * Method * Participants * Procedure * Measures * Affective Reactions * Threat Perceptions * Results * Composite Scores and Difference Scores * Tests of Hypotheses * Hypothesis 1: Different Groups Can Evoke Qualitatively Different Profiles of Emotional Reactions * Hypothesis 2: Measures of Prejudice as Traditionally Conceived Will Often Mask the Variation Across Groups in Evoked Emotion Profiles * Hypothesis 3: Different Groups Can Evoke Qualitatively Different Profiles of Perceived Threats * Hypothesis 4: General Measures of Threat Will Often Mask the Variation Across Groups in Evoked Threat Profiles * Hypothesis 5: Profiles of the Specific Threats Posed by Different Groups Will Reliably and Systematically Predict the Emotion Profiles Evoked by These Groups * Multiple regression approach * Cluster analytic approach * Discussion * Contributions of the Present Data * Related Theoretical Perspectives * Specificity of Emotion, Specificity of Threat * Alternative Appraisal Theories * Theoretical Breadth * Closing Remarks * References We offer special thanks to Terrilee Asher and the members of the Friday afternoon research seminar for their contributions to the early development of these ideas, and to Eliot Smith, Jon Maner, Aaron Taylor, and Amy Cuddy for their helpful suggestions and comments on previous versions of this article. Received Date: January 9, 2004; Revised Date: August 3, 2004; Accepted Date: September 15, 2004 Abstract The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups.
Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited. Jews are shrewd, religious, and wealthy. African Americans are noisy, athletic, and "have an attitude." Italians are loyal to family, loud, and tradition loving. And the Irish are talkative, happy-go-lucky, and quick tempered. These stereotypes, recently endorsed by American college students (Madon et al., 2001), straightforwardly demonstrate that people hold different beliefs about different groups. Researchers have long recognized this and have been documenting since the 1930s the diversity of stereotypes used to describe different groups (e.g., Devine & Elliot, 1995; Gilbert, 1951; Karlins, Coffman, & Walters, 1969; Katz & Braly, 1933; Niemann, Jennings, Rozelle, Baxter, & Sullivan, 1994). Researchers have seemingly been less interested, however, in the diversity of people's feelings toward different groups. Although Allport (1954) noted that negative prejudice can include specific "feelings of scorn or dislike, of fear or aversion" (p. 7), his own theorizing focused more on his macroscopic characterization of negative prejudice as an unfavorable feeling toward a group and its members.
This latter conceptualization of prejudice, as a general attitude or evaluation, has long dominated the research literature and has been the focus of most theoretical and empirical approaches designed to explicate the origins, operations, and implications of intergroup feelings (for a review, see Brewer & Brown, 1998). As useful as this global view of prejudice has been, we believe there is great value in contemplating seriously Allport's more textured observation--that just as people may hold qualitatively distinct beliefs about different groups, they may feel qualitatively distinct emotions toward different groups. A small set of researchers has begun to explore this possibility (e.g., Brewer & Alexander, 2002; Dijker, 1987; Esses, Haddock, & Zanna, 1993; Fiske, Cuddy, Glick, & Xu, 2002; Mackie, Devos, & Smith, 2000); we review their approaches below. Our own belief in the importance of understanding the textured emotional reactions people have toward members of other groups emerges as an implication of a broader "sociofunctional" approach we have been developing to better account for a range of intragroup and intergroup phenomena (e.g., Neuberg, Smith, & Asher, 2000).1 To anticipate our argument, we suggest that the specific feelings people have toward members of other groups should depend on the specific tangible threats they see these other groups as posing: From qualitatively different threats should emerge qualitatively different, and functionally relevant, emotions.
From this perspective, the concept of prejudice as general attitude is inherently problematic: Because the traditional prejudice construct aggregates across qualitatively different emotional reactions (e.g., anger, fear, disgust, pity, admiration, guilt)--each with its often distinct eliciting conditions, phenomenologies, facial expressions, neurologic structures, physiological patterns, and correlated behavioral propensities--it may obscure the rich texturing of emotional reactions people have toward different groups. Consequently, an exclusive focus on this traditional conceptualization of prejudice is likely to hinder the development of effective theory and practical intervention. A Sociofunctional Approach By their nature, people are group-living animals. According to many anthropologists, environmental challenges present in our evolutionary past propelled ancestral humans toward life in highly interdependent and cooperative groups (e.g., Leakey & Lewin, 1977). This "ultrasociality" (Campbell, 1982), "hypersociality" (Richerson & Boyd, 1995), or "obligatory interdependence" (Brewer, 2001) likely evolved as a means to maximize individual success: An individual was presumably able to gain more essential resources (e.g., food, water, shelter, mates) and achieve more important goals (e.g., child rearing, self-protection) by living and working with other individuals in the context of a group compared with living and working by oneself. Interdependent group living, then, can be seen as an adaptation--perhaps the most important adaptation (Barchas, 1986; Brewer, 1997; Brewer & Caporael, 1990; Leakey, 1978)--"designed" to protect the human individual from the environment's many dangers while also supporting the effective exploitation of the environment's many opportunities.2 Group life has its costs, however (e.g., R. D. Alexander, 1974; Dunbar, 1988).
For instance, group living surrounds one with individuals able to physically harm fellow group members, to spread contagious disease, or to "free ride" on their efforts. A commitment to sociality thus carries a risk: If threats such as these are left unchecked, the costs of sociality will quickly exceed its benefits. Thus, to maximize the returns on group living, individual group members should be attuned to others' features or behaviors that characterize them as potential threats. We note two distinct levels at which group members may threaten each other. The benefits of group living depend not merely on the presence of others but on the effective coordination of these individuals into a well-functioning group. Individual group members should thus be attuned not only to those features and behaviors of others that heuristically characterize them as direct threats to one's personal success but also to those features and behaviors of others that heuristically characterize them as threats to group success, which are our focus here. This latter sensitivity to group-directed threats should be especially acute for those highly invested in, and dependent on, their groups. What events signal to individuals that the functioning of their group may be compromised? Because groups enhance individual success by providing members with valuable resources, members should be attuned to potential threats to group-level resources such as territory, physical security, property, economic standing, and the like. They should also be attuned to those group structures and processes that support the group's operational integrity--to those structures and processes that encourage effective and efficient group operations.
Effective groups tend to possess strong norms of reciprocity, trust among members, systems of effective communication, authority structures for organizing individual effort and distributing group resources, common values, mechanisms for effectively educating and socializing members, members with strong in-group social identities, and the like (e.g., Brown, 1991). Individual group members should thus be especially attuned to potential threats to reciprocity (because others are either unwilling or unable to reciprocate), trust, value systems, socialization processes, authority structures, and so on (Neuberg et al., 2000). Finally, mere attunement to threats cannot be enough: Vigilance must be accompanied by psychological responses that function to minimize--or even eliminate--recognized threats and their detrimental effects. In sum, the sociofunctional approach is based on three simple, but fundamental, propositions: (a) Humans evolved as highly interdependent social beings; (b) effectively functioning groups tend to possess particular social structures and processes; and (c) individuals possess psychological mechanisms "designed" by biological and cultural evolution to take advantage of the opportunities provided by group living and to protect themselves from threats to group living. Ongoing research has used this approach to successfully predict the traits people most value for members of different social groups and the impressions of themselves they most want to present to others, to generate hypotheses regarding the nature of gossip and other forms of communicated social information, and to motivate explorations of similarities in formal systems of social control across religious and criminal justice systems (e.g., Cottrell & Neuberg, 2004; Cottrell, Neuberg, & Li, 2003; Neuberg & Story, 2003).
Here we use the sociofunctional approach, in conjunction with theory and empirical findings on the goal-relevance of discrete emotions, to generate specific predictions about the threat-driven nature of intergroup affect. The Goal Relevance of Discrete Emotions Emotions are critical to the natural goal-seeking process. They signal the presence of circumstances that threaten or profit important goals (e.g., Carver & Scheier, 1990; Ekman & Davidson, 1994; Higgins, 1987; Simon, 1967) and direct and energize behavior toward the remediation of such threats or the exploitation of such benefits (e.g., Cosmides & Tooby, 2000; Ekman, 1999; Nesse, 1990; Plutchik, 1980, 2003; Tooby & Cosmides, 1990). Emotions organize and coordinate ongoing psychological action (e.g., attention, motivation, memory, behavioral inclinations) so that people might respond more effectively to events related to individual survival and success. There is a functional specificity to the emotional system: Different events evoke different emotions. A shadowy figure quickly emerging from a dark alley--a problem related to personal security--elicits fear, whereas the theft of one's car--a problem related to personal resources--elicits anger. Moreover, distinct emotions are affiliated with specific physiological, cognitive, and behavioral tendencies, all of which operate to facilitate resolution of the problem. For example, the fear felt toward the unfamiliar figure triggers psychological and physical activity aimed at promoting escape from the potentially threatening situation, whereas the anger felt toward the property thief triggers activity aimed at promoting retrieval of the lost goods. Emotions researchers have theorized about the perceived stimulus event classes that elicit qualitatively distinct emotions and action tendencies (e.g., Ekman & Friesen, 1975; Frijda, 1986; Izard, 1991; Lazarus, 1991; Nesse, 1990; Plutchik, 1980) and have arrived at some consensus.
Table 1 highlights the links among perceived stimulus event classes, discrete emotions, action tendencies, and resulting functional outcomes for an illustrative set of emotions. For example, perceiving the obstruction of valuable goals or the taking of valuable resources produces anger and a tendency to aggress, perceiving physical or moral contamination produces disgust and a tendency to expel the contaminated object or idea, and perceiving a threat to physical safety produces fear and a tendency to flee. These first three emotions--anger, disgust, and fear--are often considered basic emotions, shaped by natural selection to automatically address recurrent survival-related problems (Ekman, 1999). Table 1 An Evolutionary Approach to Emotions Pity, envy, and guilt, on the other hand, involve more complex cognitive appraisals of social situations. These emotional reactions nonetheless progress the individual toward important adaptive outcomes. Pity (as part of the sympathy family of emotions) is hypothesized to be an important emotional response involved in the regulation of the human altruistic system (Trivers, 1971), because it may motivate prosocial behavior toward others who are temporarily disadvantaged for reasons beyond their control, thereby generating gratitude from the recipient and subsequent reciprocity of the assistance back to the helper in the future. Envy results from feelings of being deprived of valuable resources possessed by another and produces a tendency to obtain the desired objects (Lazarus, 1991; Parrott, 1991), thereby encouraging individuals to pursue limited important resources. Guilt is produced by the belief that one has engaged in a moral transgression that has harmed another (especially a perceived in-group member) and elicits an inclination toward reconciliatory behavior (Lazarus, 1991).
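The threat-emotion-action linkages described around Table 1 amount to a lookup from a perceived stimulus event class to a functionally relevant emotion and action tendency. A minimal sketch of that mapping, paraphrasing the text rather than reproducing the authors' actual table:

```python
# Perceived event class -> (evoked emotion, action tendency).
# The labels paraphrase the linkages described in the surrounding text;
# they are an illustration, not the authors' exact Table 1 entries.
THREAT_EMOTION_MAP = {
    "obstruction of goals / taking of resources": ("anger", "aggress, reclaim resources"),
    "physical or moral contamination": ("disgust", "expel, avoid contact"),
    "threat to physical safety": ("fear", "flee, self-protect"),
    "others' undeserved temporary disadvantage": ("pity", "offer prosocial help"),
    "deprivation of resources possessed by another": ("envy", "obtain desired object"),
    "own moral transgression harming another": ("guilt", "reconcile, compensate"),
}

def predicted_reaction(event_class):
    """Return the functionally relevant (emotion, action tendency) pair."""
    return THREAT_EMOTION_MAP[event_class]

print(predicted_reaction("threat to physical safety"))  # ('fear', 'flee, self-protect')
```

The dictionary form makes the paper's core claim concrete: each class of threat keys a distinct emotion, so collapsing the emotions into one valence score discards the information that distinguishes the keys.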
Like pity, guilt may also be important to the maintenance of reciprocal relations: Guilt may motivate the wrongdoer to compensate for the harm caused and to follow appropriate rules of reciprocal exchange in the future (Trivers, 1971). From Group-Relevant Threats to Discrete Emotions The more basic, "lower brain" emotions did not evolve for the purpose of helping humans manage the threats and opportunities of sociality. Although one must be wary of attributing emotional states to other animals, fear, anger, and disgust, for example, appear to exist in creatures with an evolutionary history much longer than humans' and in species that are barely social (e.g., Izard, 1978; Öhman, 1993; Rozin, Haidt, & McCauley, 1993). Evolution, however, often exploits existing adaptations for other purposes. For example, the infant attachment system may have been co-opted by natural selection to encourage romantic attachment between mates and thus enhance the survival and success of offspring (Shaver, Hazan, & Bradshaw, 1988). Because humans have long been ultrasocial, these valuable emotion-based psychological mechanisms likely became used by natural selection for the additional purpose of helping people protect valuable group resources and maintain the integrity of critical social structures and processes. Just as the theft of an individual's property will evoke anger, so too should the theft of a group's property, particularly among those group members highly invested in and dependent on the group. Other emotions, in contrast, may have indeed evolved to help social animals manage the complexities of the repeated, relatively stable interdependence that characterizes social life. For instance, unlike fear, anger, and disgust, the emotions of pity, guilt, empathy, embarrassment, and shame are inherently social and have as cognitive antecedents relatively complex appraisals that explicitly involve actual, imagined, or implied others (e.g., Lewis, 1993).
Although these emotions likely evolved in the service of managing dyadic social relations, they too may have been easily exploited by natural selection for the additional purpose of managing group and intergroup relations. Because human sociality developed to help individuals gain important tangible resources (e.g., food, shelter, mates), we expect individuals to be most attuned to threats to in-group success when there are tangible outcomes at stake. These emotion-based psychological systems should therefore operate most powerfully within interactions between groups perceived to be mutually interdependent, that is, cooperating or competing to obtain valued tangible outcomes (e.g., as in interactions between White and Black Americans). These threat-emotion systems may operate less prominently within interactions between groups defined primarily by divergent identities alone (e.g., interactions between Honda and Toyota owners). Integrating, then, the emotions research summarized in Table 1 and our understanding of the fundamental structures and processes underlying effective group operation, we have generated explicit predictions regarding the links between specific threats to the effective functioning of groups (and the more general classes of threat they represent) and the specific emotions they evoke; we present the predictions emerging from this threat-based appraisal framework in Table 2. 
Table 2 Hypothesized Theoretical Connections Between Perceived Threats to the In-Group and Elicited Primary and Secondary Emotions Anger is elicited when people confront obstacles and barriers to their desired outcomes, suggesting that intergroup anger is likely to occur when an out-group is seen to gain in-group economic resources (e.g., jobs), seize or damage in-group physical property (e.g., homes), diminish the freedoms and rights provided to in-group members, choose not to fulfill reciprocal relations with the in-group, interfere with established in-group norms and social coordination, or betray the in-group's trust. As indicated in Table 2, this anger may then spur individuals to engage in functionally appropriate aggressive behaviors aimed at removing the specific perceived obstacle. Moreover, because all intergroup threats, in the most basic sense, obstruct a desired outcome (e.g., physical safety, good health, rewarding reciprocal relations), we hypothesize that anger may be a secondary emotional reaction to an out-group perceived to carry a contagious physical illness, promote values opposing those of the in-group, endanger the in-group's physical safety, neglect a reciprocity-based relationship because of inability, or threaten the in-group's moral standing. Whether immediate or subsequent, then, we suggest that anger will accompany nearly all perceptions of out-group threat (Neuberg & Cottrell, 2002). Disgust is elicited when people encounter a physical or moral contaminant, suggesting that intergroup disgust is likely to occur when an out-group is thought to carry a contagious and harmful physical illness or when an out-group promotes values and ideals that oppose those of the in-group. This disgust may then motivate qualitatively distinct actions aimed at minimizing the physical or moral contamination.
Because threats to personal freedoms and reciprocity relations (by choice) imply that an out-group may promote values that oppose those of the in-group, we hypothesize that disgust may be a secondary emotional reaction to an out-group seen to intentionally limit the in-group's personal freedoms or violate the rules of reciprocal exchange. Fear (and its associated tendencies toward self-protective behavior) should predominate when others are perceived to threaten the group's physical safety. We furthermore hypothesize that fear may be a secondary emotional reaction to an out-group perceived to obtain in-group economic resources, seize or damage in-group property, interfere with in-group social coordination, or betray trust relations with the in-group, because each of these obstacle threats signals potential uncertainty for future well-being. Because physical and moral contamination may also heighten insecurity about the future well-being of in-group members (especially susceptible individuals), fear may also be elicited secondarily by perceived threats to group health or group values. Pity should predominate when others, particularly those potentially existing within an extended in-group, are distressed because they are unable to maintain a reciprocity-based relationship for reasons outside their control (i.e., inability); this may impel prosocial behavior focused on increasing the likelihood that others may be able to meet reciprocity-based obligations in the future. In addition, pity may occur as a secondary emotional reaction to a perceived threat to group health if the diseased others are not held responsible for contracting or passing along their affliction (e.g., Dijker, Kok, & Koomen, 1996; Weiner, Perry, & Magnusson, 1988). Guilt should predominate when an out-group, suffering because of actions of the perceiver's group, is believed to threaten the moral standing of the perceiver's group. 
After committing such image-damaging moral transgressions, individuals may then behave in ways to validate the in-group's position as good and moral (e.g., Branscombe, Doosje, & McGarty, 2002; Lickel, Schmader, & Barquissau, 2004). Finally, envy should occur as a secondary emotional reaction to others who acquire the in-group's economic resources, because these others now possess a desirable object or opportunity that the in-group lacks. Hypotheses From the above considerations we have derived five general hypotheses: Hypothesis 1: Different groups can evoke qualitatively different profiles of emotional reactions. To the extent that different groups can be seen to pose different patterns of threats (see below), they should evoke different profiles of emotional reactions.3 Hypothesis 2: Measures of prejudice as traditionally conceived will often mask the variation across groups in evoked emotion profiles. Because of its conceptualization as a general attitude or evaluation, the traditional measurement of prejudice can obscure the qualitatively distinct emotional responses people have to different groups. This hypothesis will be supported if different groups elicit similar levels of general prejudice but distinct emotion profiles. Hypothesis 3: Different groups can evoke qualitatively different profiles of perceived threats. Different groups may be perceived to threaten group-level resources and group integrity in different, and multiple, ways: Some may seize our territory and advocate values and principles incompatible with those we cherish; others may carry infectious diseases and fail to contribute their share to the common good. Such groups should elicit distinct threat profiles. Hypothesis 4: General measures of perceived threat will often mask the variation across groups in evoked threat profiles.
Just as general measures of prejudice may obscure differentiated emotional reactions to groups, general measures of perceived threat may conceal differentiated threats ostensibly posed by different groups. This hypothesis will be supported if different groups elicit similar levels of general threat but distinct threat profiles. Hypothesis 5: Profiles of the specific threats posed by different groups will reliably and systematically predict the emotion profiles evoked by these groups. If our analysis is correct, profiles of emotional reactions should emerge naturally from profiles of threat perceptions, as articulated in Table 2. This hypothesis will be supported if we can demonstrate a systematic link between the observed threat and emotion profiles. Other Contemporary Emotion- and Threat-Based Approaches to Prejudice We are not alone in recognizing the importance of moving beyond the traditional view of prejudice as a general attitude (for a review, see Mackie & Smith, 2002). Moreover, others have explicitly explored the concept of intergroup threat to tangible resources (e.g., LeVine & Campbell, 1972; Sherif, 1966; Stephan & Renfro, 2002). We briefly review these alternative approaches to clarify important points of overlap with our sociofunctional approach as well as to highlight some of the unique contributions made by the current research. Esses and her colleagues (Esses & Dovidio, 2002; Esses, Haddock, & Zanna, 1993; Haddock, Zanna, & Esses, 1993) have assessed the discrete emotional reactions (e.g., fear, anger, disgust), stereotypes (e.g., friendly, lazy), symbolic beliefs (e.g., "promote religious values," "block family values"), and general attitudes (i.e., prejudice) associated with assorted ethnic and social groups (e.g., French Canadians, Blacks, homosexuals).
To explore the associations among these constructs for each group, these researchers combined the valence and frequency of each reaction to create a single, aggregate indicator for each construct. Although an appropriate strategy given their theoretical interests, such aggregations precluded the possibility of assessing within their samples whether prejudice (as a general attitude) obscured the presence of differing emotion profiles for their different target groups and whether aggregated symbolic beliefs (constituting, perhaps, one form of threat) obscured the presence of differing symbolic threat profiles for their different target groups. Thus, although their data are potentially useful for exploring Hypotheses 1 and 2, in particular, and Hypotheses 3–5 to a substantially lesser extent, their analyses do not provide such tests. In an examination of prejudice against ethnic out-groups, Dijker and his colleagues (Dijker, 1987; Dijker, Koomen, van den Heuvel, & Frijda, 1996) assessed the emotional reactions native Dutch people experience toward different ethnic minorities (e.g., Surinamese, Turkish, and Moroccan immigrants). They, too, aggregated over discrete emotions to create, on the basis of exploratory factor analyses, four affect categories (i.e., positive mood, anxiety, irritation, concern). Despite this partial aggregation (and the difficulty it causes for rigorously testing Hypothesis 1), their findings nonetheless suggest the importance of considering specific emotions when exploring intergroup affect (e.g., Surinamese, but not Turks or Moroccans, evoked anxiety). Moreover, their data also suggest that certain threats may be more strongly associated with some emotional responses than others (e.g., the perception of danger was associated with anxiety more often than with irritation or worry), a finding consistent with Hypothesis 5.
Thus, although far from a systematic and thorough test of our hypotheses, Dijker and colleagues' findings do lend them some support. The stereotype content model (Fiske et al., 2002; Fiske, Xu, Cuddy, & Glick, 1999) posits that people experience distinct emotions toward groups perceived to differ on the dimensions of warmth and competence: pity toward high-warmth but low-competence groups, envy toward low-warmth but high-competence groups, admiration toward high-warmth and high-competence groups, and contempt toward low-warmth and low-competence groups. With respect to numerous ethnic, political, religious, and social groups within America, these researchers did indeed observe the predicted differentiated emotional reactions to groups, consistent with Hypothesis 1. We note, however, that (a) their four emotion clusters aggregate across emotions typically believed to be discrete (e.g., anger and disgust are both in the cluster labeled "contempt"), (b) other fundamental emotions (for example, fear) were never analyzed because they failed to fit cleanly into one of these four empirically driven clusters, and (c) the categorical nature of their framework (and accompanying analysis strategy) does not suggest the conceptual possibility that different groups elicit multiple emotions in different configurations (i.e., emotion profiles). As a consequence, the findings from this approach likely underestimate the diversity of emotional reactions people have to different groups; we present evidence suggesting this very point below. Moreover, the aims of these researchers were different from ours, and so we are not able to use their data to test our Hypotheses 2–5. Intergroup emotions theory (IET; Devos, Silver, Mackie, & Smith, 2002; E. R. Smith, 1993, 1999; Mackie et al., 2000) arises from the melding of social identity and self-categorization theories, on the one hand, with appraisal theories of emotions, on the other.
As with our approach, IET posits that people experience a diversity of discrete intergroup emotions toward different groups. In particular, when social identities are salient, individuals interpret situations in terms of harm or benefit for one's own group and experience specific emotions as suggested by assorted appraisal theories of emotion (they cite Frijda, 1986; Roseman, 1984; Scherer, 1988; C. A. Smith & Ellsworth, 1985). The predictions generated from IET will overlap with the predictions derived from our own framework to the extent that it uses a similar, functionally grounded theory of discrete emotions (which it appears to do) and a similar threat-based appraisal system (which is unclear); indeed, we suspect that the five hypotheses proposed here would be seen by IET proponents as consistent with that approach. Empirically, however, E. R. Smith, Mackie, and their colleagues (Devos, Silver, Mackie, & Smith, 2002; E. R. Smith, 1993, 1999; Mackie et al., 2000) have limited their explorations to the emotions of anger and fear, within the context of having experimental participants imagine interacting with groups designed to differ in the strength of threat they posed to participant in-groups (e.g., individuals valuing social order vs. freedom; fellow students at one's university). To this point, then, the data generated by IET researchers do not test our Hypotheses 1–4 and provide only a partial test of Hypothesis 5. According to image theory (M. G. Alexander, Brewer, & Herrmann, 1999; Brewer & Alexander, 2002), specific configurations of appraisals on the dimensions of intergroup competition, power, and status give rise to differentiated emotional reactions (e.g., anger, fear, envy), cognitive images (e.g., out-group as enemy, barbarian, or imperialist), and action tendencies (e.g., attack, defend, rebel).
This perspective is compatible with ours in its aim to link specific threats to specific emotions, although image theory focuses more on the sociostructural relations from which different threats and opportunities emerge, whereas we focus more on particular threats and opportunities per se. Recent empirical work examining relations among White and Black American high school students supports the image theory notion that differentiated emotional reactions are indeed associated with different out-group images (Brewer & Alexander, 2002). The findings of these researchers are thus compatible with our Hypotheses 1, 3, and 5, although we note that their categorical scheme, like that of stereotype content theory, does not straightforwardly account for the possibility that different groups elicit multiple emotions in different configurations (i.e., that they may elicit different emotion profiles). Finally, the revised integrated threat theory (Stephan & Renfro, 2002) emphasizes the importance of threat for understanding prejudice. Revised integrated threat theory posits that four umbrella categories of constructs (realistic threats to the in-group, symbolic threats to the in-group, realistic threats to the individual, and symbolic threats to the individual) cause negative psychological (e.g., prejudice) and behavioral (e.g., aggression) reactions to groups thought to pose such threats. This perspective focuses on a relatively small number of tangible threats, however, and like realistic conflict theories before it (e.g., LeVine & Campbell, 1972; Sherif, 1966) makes no claims as to how different specific threats would elicit distinct, specific emotions. Thus, the data generated by this approach are potentially relevant only to our Hypothesis 3. Although there exist clear points of convergence between our sociofunctional approach and these other perspectives, then, the points of divergence are also significant; we further compare the alternative approaches below.
Moreover, note that none of the empirical work emerging from these approaches has explicitly tested Hypotheses 2 and 4 (that general measures of prejudice and threat may actually mask across-group differences in emotion and threat profiles) or has tested Hypotheses 3 and 5 in a comprehensive manner. To test our hypotheses and to provide a uniquely rich data set useful for beginning the process of empirically differentiating among approaches, we presented participants with an assortment of ethnic, religious, and ideological groups within the United States and inquired about (a) the specific emotional reactions they have toward these groups, (b) the general feeling (i.e., prejudice) they have toward these groups, (c) the specific threats they perceive these groups as posing, and (d) the general threat they perceive these groups as posing. We predicted that different groups would elicit different profiles of discrete emotions and threats (Hypotheses 1 and 3); that differentiations among these emotion and threat profiles would often be effectively masked by simple valence-based measures of prejudice and threat (Hypotheses 2 and 4); and that there would be systematic, functional links between specific threats and specific emotions, as articulated in Table 2 (Hypothesis 5). Two hundred thirty-five European American undergraduate students participated. They were, on average, 20.60 years old (SD = 3.53), predominantly female (63%), and self-identified as mainstream Christian (51%). The majority (64%) were recruited from upper division psychology classes and received extra credit in exchange for their participation. The remainder were recruited from the introductory psychology subject pool and received required course credit in exchange for their participation. Participants from upper division psychology courses completed the questionnaire packets out of the classroom, on their own time.
Questionnaire packets were distributed to the introductory psychology participants in small groups in the laboratory; they completed the items at their own pace. Presentation of the affective response and threat perception items for each group was counterbalanced across all participants. Presented in one of 10 random orders, participants rated a set of nine groups: activist feminists, African Americans, Asian Americans, European Americans, fundamentalist Christians, gay men, Mexican Americans, Native Americans, and nonfundamentalist Christians. Because we expected few threats and little threat-related emotion to be associated with one's own groups, the participants' ethnic in-group (European Americans) and modal religious in-group (nonfundamentalist Christians) were included to serve as baselines for comparison with the other groups. We selected the additional target groups because (a) our European American participants in the American Southwest likely perceive themselves to be involved with these groups in mutually interdependent relationships involving tangible outcomes, and (b) common stereotypes suggest that these groups might be seen to pose a range of different threats, a requirement if we were to appropriately test our hypotheses. To wit, we suspected that activist feminists, fundamentalist Christians, and gay men would be seen as threatening the values and personal freedoms of our student sample and in somewhat different ways; that gay men would be seen as posing a threat to health (via a perceived association with HIV/AIDS); that Asian Americans would be seen as posing an economic threat; that African Americans and Mexican Americans would be viewed as posing physical safety, property, and reciprocity (by choice and inability) threats; and that Native Americans would be viewed as posing threats to reciprocity (by inability).
Note that the test of our hypotheses does not depend on whether we are correct in the above presumptions of which groups are associated with particular threats. Indeed, we could be entirely wrong in the threats we expect each group to pose but receive perfect support for our hypotheses, if the emotions elicited by a group are those that map as predicted onto the threats that group is actually perceived by our participants to pose. However, we were confident, on the basis of past research (e.g., Cottrell, Neuberg, & Asher, 2004; Devine & Elliot, 1995; Esses & Dovidio, 2002; Haddock et al., 1993; Haddock & Zanna, 1994; Hurh & Kim, 1989; Yee, 1992), that the collection of groups selected would provide enough variation in perceived threats to enable an adequate test of our hypotheses. To assess affective responses to the selected groups, participants reported the extent to which they experienced each feeling when thinking about a particular group and its members (1 = Not at all, 9 = Extremely). To assess overall positive evaluation, participants reported the degree to which they liked and felt positive toward each group; to assess overall negative evaluation, participants reported the extent to which they disliked and felt negative toward each group. In addition, we measured 13 emotional reactions with two items each. Some of these emotions were selected because of their straightforward relevance to our theory (see Table 2), namely anger, disgust, fear, pity, envy, and guilt, or because they were longer lasting but less intense instantiations of these (i.e., resentment, anxiety). Others were included merely to provide participants with a broader emotional judgment context (i.e., respect, happiness, hurt, sadness, pride, security, and sympathy). All participants completed these affective response items in the same random order for all groups.
Threat Perceptions To assess perceived threats associated with the selected groups, participants indicated the extent to which they agreed with statements regarding the general and specific threats that each group poses to American citizens and society (1 = Strongly Disagree, 9 = Strongly Agree). To assess general threat, participants reported the extent to which each group was dangerous and posed a threat to American citizens. To assess specific threats relevant to our sociofunctional approach (see Table 2), participants reported the extent to which they believed the target group threatened jobs and economic opportunities, threatened personal possessions, threatened personal rights and freedoms, violated reciprocity relations by choice, threatened social coordination and functioning, violated trust, threatened physical health, held values inconsistent with those of the in-group, endangered physical safety, and violated reciprocity relations because of a lack of ability.4 Two items were included to measure each of these 10 threats. All participants completed the 2 general threat items followed by the 20 specific threat items in a random arrangement. Composite Scores and Difference Scores As described, all participants completed two items designed to assess each emotion and threat construct. These a priori item pairs correlated highly with one another (all rs > .70), and so we averaged them to create composite scores for each general and specific affective response and for each general and specific threat perceived. Although it is not uncommon for researchers to further aggregate such data on the basis of exploratory factor analyses, we have chosen not to do so on technical and theoretical grounds.
Technically, because exploratory factor analysis is a data-driven approach, it runs the risk of capitalizing on chance characteristics in the data and creating unstable and incoherent factor solutions (Conway & Huffcutt, 2003; Fabrigar, Wegener, MacCallum, & Strahan, 1999). Theoretically, we believe that the individual threat measures, though correlated with one another, assess distinct categories of threat: Stealing a person's car is not the same as making the person ill or assaulting him or her with a weapon. On similar grounds, as many emotions researchers have emphasized, it is necessary to maintain firm empirical distinctions among our measured emotions: Feeling angry is not the same as feeling disgusted or feeling afraid. Indeed, growing evidence demonstrates that unique universal signals, nervous system responses, and antecedent events differentiate the basic emotions (e.g., anger, disgust, fear; Ekman, 1999). This decision to maintain firm distinctions among our threat and emotion constructs is supported by confirmatory factor analyses (CFAs).5 Moreover, if we are incorrect in our belief that these threats and emotions are distinct from one another (if, for example, anger, disgust, and fear functioned identically for our participants), then the predicted textured patterning of perceived threats and emotional patterns would not emerge, and our hypotheses would be disconfirmed. As noted above, our focus is on the potential patterning of threat-related emotions. These reactions better describe the intergroup interactions of interest, and focusing our report on them greatly streamlines the presentation of a large amount of data.
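The item-pair compositing described above (two items per construct, checked for high inter-item correlation before averaging) can be sketched as follows. The data here are randomly generated for illustration; only the averaging logic and the rs > .70 check mirror the procedure described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings: 235 participants x 2 items for one construct (1-9 scale).
item1 = rng.integers(1, 10, size=235).astype(float)
# Second item of the a priori pair, simulated to correlate with the first.
item2 = np.clip(item1 + rng.normal(0.0, 1.0, size=235), 1, 9)

# Verify the pair hangs together (the paper reports all rs > .70) ...
r = np.corrcoef(item1, item2)[0, 1]

# ... then average the pair into a single composite score per participant.
composite = (item1 + item2) / 2.0

print(round(r, 2), composite.shape)
```

Averaging only a priori pairs, rather than factor-analytic clusters, is what keeps anger, disgust, fear, and each threat category empirically distinct in the analyses that follow.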
We thus created emotion composite scores for the emotion constructs most relevant to our theoretical approach: anger/resentment, disgust, fear/anxiety, pity, and envy.6 To create a measure of overall negative prejudice, we subtracted the positive evaluation composite score for each group from the negative evaluation composite score for that group; higher values on this overall prejudice measure indicate more negative prejudice toward the group. To test Hypotheses 1–4, we used each participant's affect and threat ratings of European Americans as a baseline for comparison against their ratings of the other groups. Thus, we created and analyzed difference scores for each affect and threat by subtracting each participant's affect and threat rating for European Americans from his or her affect and threat rating for each other group. The ratings reported below thus reflect mean difference scores (relative to European Americans) for all participants in our sample. Because all participants were European American, this approach serves to eliminate idiosyncratic differences in participants' tendencies to perceive particular threats and to experience particular emotions and greatly aids with the visual identification and interpretation of affect and threat patterns. Note that our conclusions regarding Hypotheses 1–4 remain unchanged if we instead analyze raw (i.e., nondifference) scores. Tests of Hypotheses Hypothesis 1: Different Groups Can Evoke Qualitatively Different Profiles of Emotional Reactions We conducted a two-way (Target Group × Emotion Experienced) repeated-measures analysis of variance (ANOVA) on the mean difference emotion ratings; a significant Target Group × Emotion Experienced interaction would reveal that the emotion profiles do indeed differ across groups.
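The two scoring steps just described (overall prejudice as negative minus positive evaluation, and difference scores against each participant's own European American baseline) can be sketched as follows. All numbers are randomly generated stand-ins, not the study's data; only the arithmetic follows the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 235  # participants
groups = ["African Americans", "Asian Americans", "Native Americans", "European Americans"]

# Hypothetical positive and negative evaluation composites (1-9) per participant per group.
positive = {g: rng.uniform(1, 9, n) for g in groups}
negative = {g: rng.uniform(1, 9, n) for g in groups}

# Overall prejudice: negative evaluation minus positive evaluation (higher = more negative).
prejudice = {g: negative[g] - positive[g] for g in groups}

# Difference scores: each participant's rating of a group minus that same participant's
# rating of the European American baseline, removing idiosyncratic response tendencies.
baseline = prejudice["European Americans"]
diff = {g: prejudice[g] - baseline for g in groups if g != "European Americans"}

for g, d in diff.items():
    print(g, round(d.mean(), 2))
```

The same subtraction is applied to every emotion and threat composite, so each reported mean is a within-participant contrast against the in-group.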
As predicted, this interaction emerged as highly statistically significant, F(28, 6384) = 31.03, p < .00001, partial η² = .120; Table 3 presents the means and standard deviations for all emotion ratings for all groups. These data provide substantial support for Hypothesis 1. People may indeed report different patterns of emotional experience toward different groups. For the purpose of more clearly illustrating the diversity of emotional response to groups, we highlight participants' affective reactions to two subsets of groups in Figure 1 (African Americans, Asian Americans, and Native Americans) and Figure 2 (activist feminists, fundamentalist Christians, and gay men). Figure 1. Participants' mean affective reactions to African Americans, Asian Americans, and Native Americans, relative to affective reactions to European Americans. A repeated-measures analysis of variance on the emotion ratings for these three groups revealed a significant Target Group × Emotion Experienced interaction, F(8, 1824) = 50.63, p < .00001, partial η² = .182, supporting Hypothesis 1: Participants reported different patterns of emotional reactions to these different groups. Figure 2. Participants' mean affective reactions to activist feminists, fundamentalist Christians, and gay men, relative to affective reactions to European Americans. A significant Target Group × Emotion Experienced interaction, F(8, 1824) = 15.98, p < .00001, partial η² = .065, emerged in a repeated-measures analysis of variance on the emotion ratings for these three groups, indicating that participants experienced different patterns of emotional reactions to them. Table 3 Means and Standard Deviations of Emotional Reactions (Relative to European Americans) Hypothesis 2: Measures of Prejudice as Traditionally Conceived Will Often Mask the Variation Across Groups in Evoked Emotion Profiles We have just seen that different groups can evoke different patterns of discrete emotions.
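The Group × Emotion interaction tested above asks, in essence, whether the emotion profiles are nonparallel across groups. A minimal numeric illustration of that logic (hypothetical cell means, not the authors' data or their full repeated-measures ANOVA):

```python
# Hypothetical mean difference ratings; columns are anger, disgust, fear, pity, envy.
profiles = {
    "Group A": [2.0, 0.5, 0.5, 0.2, 1.5],  # e.g., an anger/envy-dominated profile
    "Group B": [0.3, 0.4, 0.5, 2.1, 0.1],  # e.g., a pity-dominated profile
}

# An interaction means the group difference is not constant across emotions:
# the two profiles are nonparallel even if their overall means were similar.
gaps = [a - b for a, b in zip(profiles["Group A"], profiles["Group B"])]
nonparallel = max(gaps) - min(gaps) > 0
print(gaps, nonparallel)
```

In a full repeated-measures ANOVA the same question is asked after partitioning out participant and main-effect variance, but the signature of the interaction is this nonconstancy of the group gap across emotions.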
Hypothesis 2 would be supported if groups that elicit distinct emotion profiles nonetheless elicit similar levels of general prejudice. Such a finding would illustrate that prejudice can mask meaningful patterns of underlying emotions. Indeed, as seen in Table 3, many groups that differed from one another in the emotion profiles they evoked also evoked comparable degrees of general prejudice. We illustrate this general pattern with the two subsets of groups presented in Figures 1 and 2. As presented in Figure 1, African Americans, Asian Americans, and Native Americans differed significantly in the emotion profiles they elicited in our participants. Moreover, they each evoked general negative prejudice: Prejudice difference score for African Americans = 1.14, t(228) = 4.74, p < .001; for Asian Americans, difference = 0.89, t(228) = 4.06, p < .001; and for Native Americans, difference = 0.76, t(228) = 3.32, p < .001. Finally, supporting Hypothesis 2, the prejudice ratings for these three groups did not significantly differ from one another, F(2, 456) = 1.42, p = .24, η² = .006. Thus, although our participants expressed similar overall negativity toward African Americans, Asian Americans, and Native Americans, they nonetheless reported different discrete emotional reactions toward them. This strongly suggests that measures of general prejudice can indeed mask a rich diversity of discrete emotional reactions. As presented in Figure 2, activist feminists, fundamentalist Christians, and gay men also differed significantly in the patterns of discrete emotions they elicited in our participants. Moreover, they all elicited substantial amounts of negative prejudice: Prejudice difference scores for feminists = 3.38, t(228) = 11.20, p < .001; for fundamentalist Christians, difference = 3.37, t(228) = 10.51, p < .001; and for gay men, difference = 2.78, t(228) = 8.75, p < .001.
Yet here again, the prejudice ratings for these three groups did not differ from one another, F(2, 456) = 1.47, p = .231, η² = .006. This pattern, too, illustrates that measures of overall prejudice can mask a notable diversity of discrete emotional reactions.

Hypothesis 3: Different Groups Can Evoke Qualitatively Different Profiles of Perceived Threats

We performed a two-way (Target Group × Threat Perceived) repeated-measures ANOVA on the mean difference threat ratings; a significant Target Group × Threat Perceived interaction would reveal that different groups can indeed be viewed as posing different profiles of threat. As predicted, this interaction emerged as a significant effect, F(63, 14427) = 46.15, p < .00001, partial η² = .168; Table 4 presents the means and standard deviations for all threat ratings for all groups. These patterns of perceived threats provide substantial support for Hypothesis 3: People may indeed perceive different patterns of specific threats from different groups. To illustrate this effect more clearly, we present in Figure 3 the patterns of threats people perceived from activist feminists, African Americans, and fundamentalist Christians.

Figure 3. Participants' mean threat perceptions for activist feminists, African Americans, and fundamentalist Christians, relative to threat perceptions for European Americans. A repeated-measures analysis of variance on the threat ratings for these three groups revealed a significant Target Group × Threat Perceived interaction, F(18, 4122) = 30.05, p < .00001, partial η² = .116, indicating that participants perceived different patterns of threat from these groups and thus illustrating support for Hypothesis 3. Reciprocity (Choice) = nonreciprocity by choice; Reciprocity (Inability) = nonreciprocity by inability.
Table 4. Means and Standard Deviations of Threat Perceptions (Relative to European Americans)

Hypothesis 4: General Measures of Threat Will Often Mask the Variation Across Groups in Evoked Threat Profiles

Our participants often believed that different groups threatened America in different ways. Hypothesis 4 would be supported if groups that evoked distinct threat profiles nonetheless evoked similar levels of general threat. Indeed, as seen in Table 4, many groups that differed from one another in the profiles of specific threats ostensibly posed also evoked similar perceptions of general threat. We illustrate this general pattern with the subset of groups presented in Figure 3. As presented in Figure 3, our participants viewed African Americans, activist feminists, and fundamentalist Christians as posing significantly different profiles of threat. Moreover, these groups are all viewed as generally threatening: the scores all differ from the European American baseline. For the general threat posed by African Americans, difference = 0.87, t(229) = 7.27, p < .001; for activist feminists, difference = 0.76, t(229) = 5.87, p < .001; and for fundamentalist Christians, difference = 0.85, t(229) = 5.85, p < .001. Finally, supporting Hypothesis 4, the general threat ratings for these groups do not differ from one another, F(2, 458) = 0.24, p = .789, η² = .001. Thus, just as a focus on general prejudice can mask an interesting and rich diversity of functionally important emotions evoked by groups, a focus on general threat can mask an interesting and rich diversity of specific threats the groups are seen as posing. We have seen, then, strong support for Hypotheses 1-4. In addition, we note that Cottrell, Neuberg, and Asher (2004) used nearly identical procedures and measures in three additional samples.
These other studies demonstrate patterns of threat perceptions and affective reactions strikingly similar to the ones we reported here and thus strongly corroborate our findings.7

Hypothesis 5: Profiles of the Specific Threats Posed by Different Groups Will Reliably and Systematically Predict the Emotion Profiles Evoked by These Groups

If intergroup emotion indeed represents a functional response to intergroup threat, then we should observe the hypothesized threat-emotion links articulated in Table 2. We explored these hypothesized connections using two essentially independent tests: one based on correlations among the measures, the other based on means of the measures.

Multiple regression approach

To predict each discrete emotion from the 10 specific threats, controlling for the influence of the other threats, we pursued a multiple regression strategy. The intercorrelations among specific threats and between all threats and all emotions were substantial, however, leading to special statistical problems (e.g., multicollinearity, suppression) and rendering findings from these models hard to interpret. We thus used instead the threat classes articulated in Table 2. Specifically, we averaged the 6 threats from the "obstacles" category (i.e., threats to economic resources, property, personal freedoms, reciprocity [by choice], social coordination, and trust) and the 2 threats from the "contamination" category (threats to group health and values). The 2 remaining threats, physical danger and nonreciprocity because of inability, were represented as before.8 We examined threat-emotion relations across target groups. Recall that participants rated all nine groups on threat perceptions and emotional reactions. To avoid complex technical issues related to nonindependence of data, each participant was randomly assigned to provide threat and emotion ratings for only one of the target groups, thereby yielding approximately equal numbers of entries for each group.
This random sample of the complete data set thus contained information on all four threat categories and all five discrete emotions across the nine target groups; this enabled us to perform five regression analyses, each predicting one emotion from the four threat categories, thereby allowing us to assess the independent predictive ability of each threat for each emotion. Because a huge number of different subsamples could be randomly drawn from the complete sample, we conducted these analyses on 50 randomly selected subsamples to reduce the likelihood of drawing conclusions from data patterns idiosyncratic to particular chance samplings. Though not identical, this strategy is somewhat similar to bootstrapping and resampling procedures. In Table 5, we present the mean standardized regression coefficients, averaged across the 50 random subsamples, for each threat category in the regression of each emotion. Note that the general pattern of regression coefficients provides yet another demonstration of the problem associated with conceiving intergroup affect and threat as unidimensional constructs: Different intergroup emotions are predominantly associated with different classes of threat. We turn now to the regression analyses for each emotion.

Table 5. Regressions of Each Emotion on Threat Categories

In line with the hypothesized theoretical connections articulated in Table 2, we expected anger to be independently predicted by obstacle threats; this was clearly the case (average β = .58, p < .001). We also hypothesized that anger might be a secondary emotional reaction to threats to group health and group values; the contamination category did indeed predict anger (average β = .11, p < .001). We also speculated that anger might be secondarily associated with threats to physical safety and reciprocity (because of lack of ability); these speculations were not supported.
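The subsampling logic just described can be sketched as follows. All data here are hypothetical stand-ins (the actual ratings are not public); the point is the mechanic of drawing one group per participant, computing standardized betas, and averaging over 50 draws:

```python
import numpy as np

rng = np.random.default_rng(1)

def standardized_betas(X, y):
    """OLS slopes after z-scoring predictors and outcome."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(yz)), Xz])
    return np.linalg.lstsq(design, yz, rcond=None)[0][1:]

# Hypothetical stand-in ratings: 229 participants x 9 groups,
# 4 threat categories predicting one emotion (here, "anger").
n_part, n_groups, n_threats = 229, 9, 4
threats = rng.normal(size=(n_part, n_groups, n_threats))
anger = 0.6 * threats[..., 0] + rng.normal(size=(n_part, n_groups))

# Each subsample assigns every participant ONE random group, avoiding
# nonindependence; betas are then averaged across 50 such draws.
betas = []
for _ in range(50):
    pick = rng.integers(0, n_groups, n_part)
    X = threats[np.arange(n_part), pick]
    y = anger[np.arange(n_part), pick]
    betas.append(standardized_betas(X, y))
mean_betas = np.mean(betas, axis=0)
print(mean_betas.round(2))
```

With data simulated so that only the first threat category drives the emotion, only that category's averaged beta comes out substantial, which is the pattern of specificity the paper's Table 5 reports for real ratings.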
Second, we expected that disgust would be independently predicted by contamination; this hypothesis, too, was strongly supported (average β = .35, p < .001). We also thought that two of the obstacle threats in particular (i.e., to personal freedoms and reciprocity relations) might secondarily predict disgust; although obstacle threat as a general class did independently predict disgust (average β = .36, p < .001), the outcomes of our more specific speculations were clearly mixed (see Table 6).

Table 6. Regressions to Test Exploratory, Secondary Predictions

Third, we expected that fear would be independently predicted by physical safety threat. This hypothesis was strongly supported (average β = .37, p < .001), as was our general secondary prediction that obstacle threats might also independently predict fear (average β = .30, p < .001). A perusal of Table 6, however, reveals that the success of our specific secondary predictions regarding specific obstacles was mixed. Fourth, we expected that pity would be independently predicted by the inability to reciprocate, and this was clearly the case (average β = .17, p < .001). Our lone secondary hypothesis, that pity would also be independently associated with the possibility of disease contamination, was supported as well: Contamination in general predicted pity (average β = .20, p < .001); however, this was due to both the disease and the values components of the contamination aggregate (see Table 6). Finally, we expected that envy would be independently predicted by the obstacle of economic threat. Consistent with this, envy was predicted by obstacle threat in the aggregate (average β = .18, p < .001). Moreover, a perusal of Table 6 reveals that this obstacles-envy link was indeed driven largely by economic threat in particular.
In sum, our primary predictions regarding the functional links between threat classes and their affiliated emotions find strong support in these data: Obstacle threat emerged as an independent predictor of anger, contamination threat emerged as an independent predictor of disgust, physical safety threat emerged as an independent predictor of fear, and reciprocity threat because of inability emerged as an independent predictor of pity. In addition, many of our secondary predictions were borne out as well. Indeed, taking stock of the 20 entries in Table 5, we see that (a) all four of our primary hypotheses (bolded entries) were supported and (b) five of our eight secondary hypotheses (italicized entries) were supported. As further support of our hypotheses, we note that no threat class expected to show a secondary association with an emotion emerged as a better independent predictor than the threat class expected to show a primary association with that emotion. Though our accuracy in predicting null findings (entries in conventional font) may appear less than ideal, these mean regression coefficients are numerically rather small and, in fact, never exceed a coefficient whose significance is expected as the result of a primary or secondary prediction. This overall success rate can be contrasted with the straightforward alternative that there is no specificity of links between threat classes and emotions, operationalized such that either no threat classes independently predict specific emotions or all threat classes independently and equivalently predict all emotions; neither alternative received empirical support from our data. Hypothesis 5 addresses the crux of our theoretical arguments: the notion that specific threats elicit functionally focused emotions. To fully appreciate intergroup emotions, then, the focus should be on specific threat perceptions rather than on the particular group thought to pose a threat.
In this sense, the nine target groups considered in this research are secondary in interest to the threats associated with each group. Threats, as compared with target group, should be better predictors of emotions. This, of course, is an empirical question: How well do target groups per se predict emotional response after controlling for the four threat classes? To better gauge the size of this effect, we dummy coded the target groups and compared the proportions of variation in each emotional reaction explained by four effects: the effect of threats, the effect of threats controlling for target group, the effect of target group, and the effect of target group controlling for threats. In Table 7, we present the mean ΔR² values, averaged across the 50 random subsamples, for each of these effects in the regression of each emotion. First, we note that the threat classes, as a set, account for a substantial amount of variation in each emotion (especially in the cases of anger, disgust, and fear). Moreover, this effect remains sizable after controlling for the target group being rated. In comparison, target group tends to account for a much smaller, though still significant, portion of variation in emotional response. Crucial to our theoretical arguments, this effect significantly decreases even further after controlling for the threat classes. Threat perceptions therefore appear (at least) to partially mediate the observed group differences in emotional reactions. In theory, we would have hoped for complete mediation.
Of course, even if complete mediation by threats exists, it would be difficult to uncover in this investigation because (a) we have not included in our analyses all threat perceptions relevant to emotional reactions (we do not claim to be providing a veritable census of threats); (b) for statistical stability reasons given our sample size, we estimated only main effects of threat perceptions on emotional reactions, thereby not including any of the ways in which the many possible interactions among our 10 threats might account for apparent target group effects; and (c) none of our threat perceptions and emotional reactions were measured perfectly, without error. It is thus the case that the unique effects of target group on emotions, as small as they are, actually overestimate their true sizes. Using the multiple regression approach, then, we see strong support for Hypothesis 5.9

Table 7. Regressions to Compare Effects of Group Type and Threat Perceptions

Cluster analytic approach

Hypothesis 5 posits, generally, that emotion profiles map onto threat profiles. In addition to the multiple regression approach, then, one can alternatively test this hypothesis by assessing the extent to which groups seen to pose similar patterns of threat also evoke similar patterns of emotion. To the extent they do not, support for Hypothesis 5 would be weakened. Cluster analysis (Hair, Anderson, Tatham, & Black, 1992) is a "technique for grouping individuals or objects into clusters so that objects in the same cluster are more like each other than they are like objects in other clusters" (p. 265), and recent uses of this analysis (Fiske et al., 2002; Lickel et al., 2000) have indeed proven valuable in identifying clusters of groups and their common characteristics. As a multivariate technique, cluster analysis is especially well suited for our purposes, because it can calculate similarities and differences among mean profiles of multiple threat or emotion ratings.
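The ΔR² comparison behind Table 7 can be illustrated with a toy mediation setup. Everything below is simulated under the pattern the text argues for (group differences in emotion flowing through threat); it is a sketch of the logic, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1.0 - resid.var() / y.var()

# Simulated data in which group differences operate through threat.
n, n_groups = 450, 9
group = rng.integers(0, n_groups, n)
threat = 0.3 * group + rng.normal(size=n)     # groups differ in threat
emotion = 0.8 * threat + rng.normal(size=n)   # threat drives emotion

dummies = np.eye(n_groups)[group][:, 1:]      # dummy-coded target group

r2_threat = r_squared(threat[:, None], emotion)
r2_group = r_squared(dummies, emotion)
r2_both = r_squared(np.column_stack([threat[:, None], dummies]), emotion)

# Group predicts emotion on its own, but adds little once threat is
# controlled (Delta R^2): the signature of (partial) mediation by threat.
print(round(r2_group, 3), round(r2_both - r2_threat, 3))
```

In this simulation the group dummies explain a respectable share of variance alone, yet their increment over the threat predictor is near zero, mirroring the shrinkage of the target-group effect the paper reports after controlling for threat classes.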
One convenient technical implication of this is that cluster analysis is not susceptible to issues of multicollinearity and suppression, both of which complicated our attempt to perform simple multiple regression analyses using the 10 specific threats. A second implication is that it essentially provides an independent test of the hypothesis using the same data set. We began by averaging the participants' ratings of each of the nine groups for each of the 10 specific threats and each of the five discrete emotions. Though we could cluster analyze threat and emotion scores representing differences relative to the threat and emotion scores for European Americans (as we did when testing Hypotheses 1-4, for the reasons discussed above), we chose to average and analyze original threat and emotion ratings for all nine groups, thereby including European Americans as a group in the cluster analyses. We expected that the participants' in-groups (that is, European Americans and nonfundamentalist Christians) might form a single cluster, with threat and emotion profiles differing from those of the other clusters. The use of original threat and emotion ratings for European Americans, as well as the other groups, allows us to explore this idea and to better examine similarities and differences in the profiles. We note that cluster analyses on the difference scores and cluster analyses on the original scores yield identical results (except for the necessary absence of European Americans from the cluster solutions for difference scores). Following the advice and example presented in Hair et al. (1992), we used two types of cluster analysis, each serving a different purpose. Hierarchical cluster analysis is particularly useful for determining the optimal number of clusters present in the data, whereas k-means cluster analysis is particularly useful for determining the arrangement of the nine groups within these clusters.
Hierarchical cluster analysis operates on a similarity matrix containing similarity indices among the objects being clustered (in this case, the nine target groups), using some set of characteristics of each object (the profiles of 10 threats or five emotions). These similarity measures, which involve no decisions about the appropriate number of clusters within the data, offer an additional straightforward test of Hypothesis 5: To the extent there exists a positive correlation between threat similarity measures and emotion similarity measures, Hypothesis 5 is further supported. Because it is the most commonly used measure of interobject similarity (Hair et al., 1992), we calculated the Euclidean distance between each pair of objects two times, once using the threat profiles and once using the emotion profiles. The correlation coefficient between these threat-based distances and emotion-based distances was .41 (p = .013), indicating that groups that are similar on threat profiles are also similar on emotion profiles, supporting Hypothesis 5. We note above that these similarity measures offer no explicit information about the optimal number of clusters in the data or the arrangement of groups into the clusters. As an extension of the demonstrated relationship between the threat and emotion similarity measures, however, we might reasonably expect the specific arrangement of groups into "threat clusters" to map onto the specific arrangement of groups into "emotion clusters." Because our theoretical framework assigns causal priority to perceived threats, we first performed hierarchical cluster analysis (using Ward's method) on the ratings of the specific threats ostensibly posed by the nine target groups. Although decisions about the best fitting number of clusters are inherently subjective, we adhered to conventional decision rules as outlined in Hair et al. (1992), Blashfield and Aldenderfer (1988), and Everitt and Dunn (2001).
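The distance-correlation check is easy to reproduce in outline: build one Euclidean distance vector over the 36 group pairs from the threat profiles, another from the emotion profiles, and correlate them. The profiles below are simulated stand-ins (the actual cell means live in Tables 3 and 4):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical stand-in profiles: 9 groups x 10 threats, with emotion
# profiles loosely derived from the threat profiles.
threat_profiles = rng.normal(size=(9, 10))
emotion_profiles = threat_profiles[:, :5] + 0.5 * rng.normal(size=(9, 5))

# Pairwise Euclidean distances among the nine groups, one vector per
# profile type: 9 * 8 / 2 = 36 pairs each.
d_threat = pdist(threat_profiles, metric="euclidean")
d_emotion = pdist(emotion_profiles, metric="euclidean")

# The paper reports r = .41 (p = .013) for this correlation on real data.
r, p = pearsonr(d_threat, d_emotion)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Note that the 36 pairwise distances are not independent observations, so the reported p value should be read as descriptive; a permutation (Mantel-style) test would be the stricter alternative.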
The agglomeration schedule of a hierarchical cluster analysis specifies the groups that are merged in each stage of the analysis and provides coefficients that indicate the distances between these newly merged groups. Because a large agglomeration coefficient indicates that two relatively different groups have been combined, typical guidelines suggest selecting the number of clusters prior to a large increase in the agglomeration coefficient. Guided by these decision rules, a five-cluster solution offered the best fit to the threat profile data. We next turned to k-means cluster analysis on the threat ratings to determine how the nine target groups fit into the five clusters. Because differences in the randomly chosen initial cluster centers may alter the final cluster solution (Hair et al., 1992), we conducted this analysis multiple times on the same data configured in different arrangements to establish the most stable five-cluster solution, which is presented in the left side of Table 8.

Table 8. Five-Cluster Solutions From Threat and Emotion Cluster Analyses

Moving to emotional responses, k-means cluster analysis was next performed on the ratings of the discrete emotions participants experienced when considering these same nine groups. As noted above, we give causal precedence to perceived threats. We therefore constrained this analysis of the emotion ratings to a five-cluster solution, because this was the most appropriate solution for the threat ratings. This cluster analysis was also performed multiple times to establish the most stable five-cluster solution, which is presented in the right side of Table 8. As Table 8 clearly shows, there is great overlap between the clusters emerging from the analysis of threat perceptions and the clusters emerging from the analysis of emotional experiences: Groups seen as similar in the patterns of threats they pose were also seen as similar in the patterns of emotions they elicited.
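A compact sketch of the two-step procedure: scipy's Ward linkage for choosing the cluster count from the agglomeration schedule, and a minimal Lloyd-style k-means for assigning groups. The data are simulated with a planted three-cluster structure; the real analysis used the Table 4 threat means and settled on five clusters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)

# Hypothetical stand-in: mean ratings of 9 groups on 10 threats, with a
# planted three-cluster structure (the real cell means are in Table 4).
prototypes = 8.0 * np.eye(3, 10)
profiles = prototypes[np.repeat(np.arange(3), 3)] + rng.normal(size=(9, 10))

# Step 1: hierarchical clustering (Ward's method); the merge-distance
# column of the agglomeration schedule (Z[:, 2]) jumps sharply once
# genuinely different clusters start being combined.
Z = linkage(profiles, method="ward")
hier_labels = fcluster(Z, t=3, criterion="maxclust")

# Step 2: a minimal Lloyd's k-means (farthest-first seeding) to place
# the nine groups into the chosen number of clusters.
def kmeans_labels(X, k, n_iter=20):
    centers = [X[0]]
    for _ in range(k - 1):  # farthest-first initialization
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):  # Lloyd iterations
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

km_labels = kmeans_labels(profiles, 3)
print(hier_labels, km_labels)
```

With well-separated profiles, both steps recover the same partition, which is the cross-method agreement the text relies on when carrying the five-cluster count from the threat analysis into the emotion analysis.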
Indeed, the only difference between the two cluster analyses involves the movement of Asian Americans: In the threat analysis, Asian Americans clustered with Native Americans (Cluster 4) because of the perception that these two groups both hold values inconsistent with mainstream American values. In the emotions analysis, Asian Americans clustered with European Americans and nonfundamentalist Christians because of the relatively little threat-related affect elicited by these groups. Aside from this single change, however, the two cluster solutions, derived from analyses of different judgments, are strikingly similar. The probability of observing a perfect replication with all nine groups considered in the calculation, merely by chance, is .00006. We adjusted this value to account for the "defection" by the single group (i.e., Asian Americans); the probability of observing this slightly imperfect match between the two cluster solutions merely by chance remains a very small .0003.10 In sum, groups that clustered together on perceived threat also (nearly perfectly) clustered together on elicited emotions, thereby providing further support for Hypothesis 5.

Discussion

We derived five general hypotheses from our sociofunctional analysis of intragroup and intergroup relations and tested them by examining European American participants' reactions to a variety of ethnic, religious, and ideologically oriented groups encountered frequently within the United States. As predicted, (a) different groups can evoke different profiles of emotions; (b) prejudice, as traditionally measured, can obscure the rich texture of these emotional experiences; (c) different groups are often believed to pose different profiles of threat to one's in-group; and (d) measures of general threat can mask the rich texture of these threat perceptions. We believe these data are the first to provide straightforward empirical support for Hypotheses 2 and 4.
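The .00006 baseline is consistent with one simple chance model (our reconstruction; footnote 10 presumably gives the authors' exact calculation): assign each of the 9 groups independently and uniformly to one of 5 clusters, treat cluster labels as interchangeable, and count the 5! relabelings of the target partition among the 5^9 possible assignments:

```python
from math import factorial

# 5! ways to relabel a partition that uses all five clusters, out of
# 5^9 equally likely assignments of nine groups to five labeled clusters.
p_perfect = factorial(5) / 5 ** 9
print(round(p_perfect, 5))  # close to the reported .00006
```

The exact value under this model is 120 / 1,953,125, roughly 6.1 in 100,000, matching the reported figure to the precision given.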
Two sets of analyses also support our fifth hypothesis, that emotional experience arises systematically from threat perception: (a) The perception of particular threats predicted the experience of functionally associated emotions, and (b) groups that elicited similar threat profiles also elicited similar emotion profiles. Although each statistical technique has its own limitations, the cumulative evidence from these analyses offers strong support for Hypothesis 5. Of course, a stronger causal test of Hypothesis 5 is impossible given the correlational nature of our data and participants' preexisting feelings toward and beliefs about the real-world groups we selected. A more rigorous test would require participants to respond to novel or artificial groups about which we could systematically manipulate specific patterns of threats and subsequently measure patterns of emotional response.11

Contributions of the Present Data

Our data illustrate quite clearly that the traditional operationalization of prejudice, as a general attitude, can obscure the richness of emotional experience that groups elicit from others: People do not merely experience evaluative valence when encountering members of groups but instead experience discrete emotions. Moreover, as our threat and emotion profiles make clear, groups cannot be simply characterized as posing one particular threat or as eliciting one particular emotion. Rather, groups are seen to pose multiple threats and to elicit a variety of emotions, often in interesting combinations. In all, then, the negative implications of adhering to the traditional view of prejudice may be substantial. Just as emotion profiles varied across groups, so did threat profiles. The current data thus also suggest a complement to the traditional view of stereotype as trait.
Specifically, the sociofunctional approach presumes that the most important stereotypical knowledge should be knowledge that is relevant to the threats and opportunities the out-group provides for the in-group. Indeed, we suspect that most stereotypical knowledge can usefully be framed in terms of the stable beliefs about the threats and opportunities groups are seen to pose. That is, particular groups are stereotypically characterized as lazy because they are perceived to contribute less than their fair share, as aggressive because they are perceived to threaten physical safety, and so on. More generally, we should note the uniqueness of the data we report here. In terms of affect, we have gathered data about a wide range of emotional reactions people have toward a variety of groups. Although a few others have assessed such a range of emotions, they have somewhat mitigated the value of doing so by aggregating over them (e.g., Brewer & Alexander, 2002; Dijker, 1987; Esses et al., 1993; Fiske et al., 2002). In terms of beliefs, we have begun to document a wide variety of threats that may be stereotypically linked to a variety of groups well known within the United States; to our knowledge, no similar data set exists. Because we have maintained the discrete nature of the assessed emotions and threats, other researchers testing hypotheses of intergroup affect and threat gain access to a useful, rich set of data. Beyond their usefulness for testing our hypotheses, then, these data should also provide researchers with textured descriptive data about how (at least some) people view and feel toward a range of different groups. We acknowledge, of course, that the reactions of our European American university students to specific groups will not correspond perfectly with the reactions of others in different places and at other times.
The threats, and resultant emotional reactions, that members of a particular group associate with another group should emerge from the functional relationship between the two groups as well as associated sociohistorical factors (e.g., Brewer & Alexander, 2002; Fiske et al., 2002). Thus, the current sample should represent well the emotional and stereotypical content held by other samples only to the extent they share similar functional relationships with the groups we have explored here. To the extent, however, that other samples have different functional relationships with these target groups, they should form different threat profiles and experience a different configuration of emotions. For example, because African American and Mexican American respondents differ in the threats they see European Americans posing to their own groups, they should also differ in their emotional reactions to European Americans, and they do (Cottrell & Neuberg, 2003). Note, however, that such variation across perceiver samples in the specific threat perceptions and feelings evoked by target groups does not imply that these samples will exhibit different mappings between specific threats and specific emotions: Regardless of sample, we expect that particular profiles of emotional experience (e.g., those dominated by fear) will emerge systematically from conceptually relevant profiles of threat (i.e., those dominated by perceived threat to physical safety). In a similar vein, individuals who differ from one another in their inclinations toward particular threat appraisals (e.g., individual differences in perceived vulnerability to disease) should differ from one another in the particular intergroup emotions they typically experience (e.g., disgust; see Schaller, Park, & Faulkner, 2003).
A careful look at the emotion profile for Asian Americans reveals a potentially interesting discovery: Our European American participants reported significant general negative prejudice toward Asian Americans but little or no specific threat-related emotion. Across at least four data sets collected by our lab, we have consistently found a similar affect profile for Asian Americans (although slight envy emerges in some samples). This anomaly may be the result of simple and relatively uninteresting causes. In particular, we might surmise that reports of envy toward Asian Americans could appear unjustified or "unsportsmanlike" in a society that so values meritocracy. That is, Americans may tend to view Asian American successes as deserved achievements and thus may be reluctant to admit to or report feeling envious of them. Alternatively, the high status accorded to Asian Americans may be identity threatening, leading individuals to experience specific emotions other than those explored here, such as schadenfreude (pleasure in another's misfortune), an emotion potentially directed toward high-status groups that come upon hard times.12 In all, we are intrigued by the various possibilities and encourage other researchers to explore more deeply specific feelings toward Asian Americans and other higher status groups. Finally, because research findings lend support to the theoretical frameworks that hypothesize their existence, the current data support the usefulness of our broader sociofunctional approach. That these data may also be viewed as consistent with predictions generated from alternative frameworks does not preclude their value for the sociofunctional approach; we return to this point below.
Related Theoretical Perspectives

We overviewed in the introduction other research programs and perspectives that take seriously the potential roles that intergroup emotions and tangible intergroup threats play in characterizing prejudice and intergroup relations. Here, within the context of addressing several important theoretical issues, we briefly highlight some similarities and differences among these alternative, and sometimes complementary, approaches.

Specificity of Emotion, Specificity of Threat

Along with others, we propose that the traditional view of prejudice as general attitude is too gross. As our data indeed demonstrate, prejudices go beyond mere negative feelings toward groups to also reflect patterns of specific emotions (anger, fear, disgust, and the like), patterns that conventional measures of prejudice mask. This recognition is important because, as reviewed above, qualitatively different emotions tend to be associated with qualitatively different actions: People have the urge to aggress against those who anger them, escape those who frighten them, and avoid close contact with those who disgust them. Researchers who ignore the differences in emotion profiles elicited by different groups will thus have great difficulty making fine-grained predictions about intergroup behavior. Of course, if one's aim is only to predict whether a group is likely to be discriminated against, in general, then a general attitude assessment may indeed be sufficient. We suspect, however, that there are important implications, theoretical and practical, of being discriminated against via attack, avoidance, or quarantine, and so we prefer the finer level distinctions. Proponents of alternative models of intergroup affect generally share this view, although there exist some important differences in preferred level of emotion specificity.
For instance, Dijker, Fiske, and their colleagues (Dijker, 1987; Dijker, Koomen, et al., 1996; Fiske et al., 1999, 2002) have used exploratory factor analyses to reduce the number of specific emotions they actually assess to a somewhat smaller set to be analyzed. We think this strategy is less than ideal, for several reasons. First, by their very nature, exploratory analyses force data aggregation in a manner uninformed by insights from the emotions literature, which is increasingly recognizing important distinctions among the different emotions (e.g., Ekman, 1999). Second, such an aggregation strategy increases the likelihood that functionally important emotions may be artificially eliminated from investigation because of idiosyncratic features of the analysis (e.g., the other emotions judged, the criteria chosen to select factor dimensions, the relative reliabilities of the different items). Third, for the same reasons that exploratory factor analyses may lead one to omit theoretically relevant emotions, they may also lead one to overaggregate emotions. Finally, the strategy of data-driven aggregation can lead to groups being characterized as similar when they are not. In the research we report here, we have chosen to maintain the demonstrated distinctions among potential intergroup emotions. Note that if our choice of this finer grain size were a poor one, the hypothesized differences in emotion profiles across groups would not have materialized; if contempt, for example, were the more appropriate level of analysis, then we would have observed no differences in participants' reports of anger and disgust. Participants did indeed make such differentiations, however, lending support to our chosen level of affect specificity. We have taken a similar view when contemplating the appropriate specificity at which to consider intergroup threat. In particular, we rely on a theory-driven analysis in which distinct threats remain empirically distinguished from each other.
Recall that the revised integrated threat theory (Stephan & Renfro, 2002; Stephan & Stephan, 2000) posits that four general categories of constructs (realistic threats to the in-group, symbolic threats to the in-group, realistic threats to the individual, and symbolic threats to the individual) are important in intergroup relations. There is clearly some overlap in our approaches. However, in the absence of finer distinctions among threats, revised integrated threat theory will be unable to account for the observed variation in emotional responses to different groups within each umbrella category.

Alternative Appraisal Theories

The perspectives on intergroup emotions we discuss here share the assumption that different emotions emerge from different appraisals. The approaches differ, however, in their underlying appraisal frameworks. Our sociofunctional perspective proposes that perceptions of specific threats to (and opportunities for) tangible in-group resources and group structures and processes lead to specific intergroup emotions. We articulate our underlying threat-based appraisal theory in detail (see Table 2) and have tested its usefulness via multiple regression and cluster analyses. One implication of this appraisal approach is that it allows for the possibility that groups can be perceived as posing multiple threats to one's own group. This, in turn, suggests the value of examining profiles of perceived threats, a value validated by the findings reported here. In contrast to our threat-based appraisal system, the stereotype content model and image theory look for the sources of emotional response in appraisals of the structural relationships between groups. The stereotype content model (Fiske et al., 1999, 2002) proposes that intergroup emotions result from individuals' assessments of other groups' warmth (warm vs. cold) and competence (competent vs. incompetent), which emerge from perceptions of each group's competition and status, respectively.
These warmth and competence dimensions combine to form a matrix of four possible general views of other groups, and each quadrant engenders a different emotion. An implication of this framework, then, is an exclusive focus on these four emotions (admiration, envy, pity, and contempt). In addition to neglecting the common intergroup emotion of fear and aggregating across anger and disgust, this view does not straightforwardly imply the usefulness of characterizing prejudices in terms of emotional profiles. Along slightly different lines, image theory (M. G. Alexander et al., 1999; Brewer & Alexander, 2002) suggests that emotional responses arise from perceptions of other groups on three dimensions: competition, status, and power. Different configurations of these appraisal dimensions produce different images of the other groups, and each image evokes unique specific emotions. Because groups are presumably represented by only one image, image theory also does not straightforwardly suggest the value of characterizing prejudices in terms of emotion profiles. The comprehensive appraisal framework underlying IET (e.g., Mackie et al., 2000) has not been explicitly articulated but appears to be based on an integration of existing appraisal theories of emotion (prominently cited are Frijda, 1986; Roseman, 1984; Scherer, 1988; C. A. Smith & Ellsworth, 1985). That IET theorists have tended to focus their empirical work narrowly on individual components of their apparent appraisal framework may explain an empirical difficulty they recently encountered. Specifically, they predicted that in intergroup situations involving potential threats to personal freedoms and beliefs, participants would respond with anger toward the out-group if the in-group was relatively strong and with fear if the in-group was relatively weak; only the predicted anger reaction emerged, however (Mackie et al., 2000).
In a later study, however, in which participants faced a scenario involving physical altercation, the predicted fear response was obtained (Devos et al., 2002). These findings, though not initially predicted by the IET theorists, are consistent with our threat-based appraisal framework, in which threats to physical safety elicit fear and obstructions of important goals elicit anger. Nonetheless, some of the similarities between our two approaches appear striking enough that we have suggested elsewhere that one might profitably view the IET and a sociofunctional perspective as complementary, with the IET nested within the broader sociofunctional approach (Neuberg & Cottrell, 2002).

Theoretical Breadth

We note one additional difference between the sociofunctional framework and the alternatives we have discussed here. Whereas these others are explicitly about prejudice, intergroup affect, or stereotype content, ours is not. The foundation of the sociofunctional framework is an understanding of the universal nature of intragroup structures and processes, and from this foundation we have derived implications for intergroup affect. However, we have also derived implications for the personal characteristics and traits that people are likely to value (and devalue) for different kinds of groups, for the aspects of self that people are likely to present or manufacture in different social settings, for the kinds of social information that perceivers are especially likely to seek and attune themselves to, for the areas in which legal systems across the globe ought to be similar to (or different from) one another, for commonalities (and differences) in the social teachings of different religions, and so forth. We have begun to accumulate data in several of these domains, and the data are proving to be consistent with the sociofunctional approach (e.g., Cottrell & Neuberg, 2004; Cottrell, Neuberg, & Li, 2003; Neuberg & Story, 2003).
The sociofunctional framework is thus broader in its scope. All else being equal, this lends it some degree of advantage over alternative, but narrower, frameworks.

Closing Remarks

There can be little doubt that the concept of prejudice has been a useful one and will remain useful to the extent that one is primarily interested in making general predictions across a broad class of discriminatory behaviors. As with most scientific endeavors, however, the deeper one wants to probe and the more one wants to understand, the more precise and textured one's conceptual and operational tools must become. The data reported here clearly illustrate that the traditional view of prejudice (conceptualized as a general attitude and operationalized via simple evaluation items) is often too gross a tool for understanding the often highly textured nature of intergroup affect. Moreover, we believe the sociofunctional approach is better able to account for these findings than current alternatives, none of which make the full set of predictions we have tested here. Finally, many of the currently dominant theoretical explorations of prejudice focus on process: on how prejudices are activated, how they influence cognition and action, how individual and group variables influence these processes, and so forth. By focusing on the contents of social and intergroup relations, we believe the sociofunctional approach provides an important complement to these models.

References

Alexander, M. G., Brewer, M. B., & Herrmann, R. K. (1999). Images and affect: A functional analysis of out-group stereotypes. Journal of Personality and Social Psychology, 77, 78–93.
Alexander, R. D. (1974). The evolution of social behavior. Annual Review of Ecology and Systematics, 5, 325–383.
Allport, G. W. (1954). The nature of prejudice. Cambridge, MA: Addison-Wesley.
Barchas, P. (1986). A sociophysiological orientation to small groups. In E.
Lawler (Ed.), Advances in group processes (Vol. 3, pp. 209–246). Greenwich, CT: JAI Press.
Blashfield, R. K., & Aldenderfer, M. S. (1988). The methods and problems of cluster analysis. In J. R. Nesselroade & R. B. Cattell (Eds.), Handbook of multivariate experimental psychology (pp. 447–473). New York: Plenum Press.
Branscombe, N. R., Doosje, B., & McGarty, C. (2002). Antecedents and consequences of collective guilt. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 49–66). New York: Psychology Press.
Brewer, M. B. (1997). On the social origins of human nature. In C. McGarty & S. A. Haslam (Eds.), The message of social psychology: Perspectives on mind in society (pp. 54–62). Cambridge, MA: Blackwell.
Brewer, M. B. (2001). Ingroup identification and intergroup conflict: When does ingroup love become outgroup hate? In R. Ashmore, L. Jussim, & D. Wilder (Eds.), Social identity, intergroup conflict, and conflict reduction (pp. 17–41). New York: Oxford University Press.
Brewer, M. B., & Alexander, M. G. (2002). Intergroup emotions and images. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 209–225). New York: Psychology Press.
Brewer, M. B., & Brown, R. (1998). Intergroup relations. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., pp. 554–594). New York: McGraw-Hill.
Brewer, M. B., & Caporael, L. R. (1990). Selfish genes vs. selfish people: Sociobiology as origin myth. Motivation and Emotion, 14, 237–242.
Brown, D. E. (1991). Human universals. New York: McGraw-Hill.
Campbell, D. T. (1982). Legal and primary-group social controls. Journal of Social and Biological Structures, 5, 431–438.
Carver, C. S., & Scheier, M. F. (1990).
Origins and functions of positive and negative affect: A control-process view. Psychological Review, 97, 19–35.
Conway, J. M., & Huffcutt, A. I. (2003). A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods, 6, 147–168.
Cosmides, L., & Tooby, J. (2000). Evolutionary psychology and the emotions. In M. Lewis & J. M. Haviland-Jones (Eds.), Handbook of emotions (pp. 91–115). New York: Guilford Press.
Cottrell, C. A., & Neuberg, S. L. (2003, February). From patterns of threat to patterns of behavior: Capturing the complexity of intergroup interaction. Paper presented at the annual meeting of the Society for Personality and Social Psychology, Los Angeles.
Cottrell, C. A., & Neuberg, S. L. (2004). How do people present themselves to fellow group members? A sociofunctional analysis of valued and devalued self-presentations. Manuscript in preparation, Arizona State University.
Cottrell, C. A., Neuberg, S. L., & Asher, T. (2004). [Threat perceptions and emotional reactions toward different groups]. Unpublished raw data, Arizona State University.
Cottrell, C. A., Neuberg, S. L., & Li, N. P. (2003, February). What do people want in a group member? A biocultural analysis of valued and devalued characteristics. Poster presented at the annual meeting of the Society for Personality and Social Psychology, Los Angeles.
Devine, P. G., & Elliot, A. J. (1995). Are racial stereotypes really fading? The Princeton trilogy revisited. Personality and Social Psychology Bulletin, 21, 1139–1150.
Devos, T., Silver, L. A., Mackie, D. M., & Smith, E. R. (2002). Experiencing intergroup emotions. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 111–134). New York: Psychology Press.
Dijker, A. J. (1987). Emotional reactions to ethnic minorities. European Journal of Social Psychology, 17, 305–325.
Dijker, A. J., Kok, G., & Koomen, W. (1996). Emotional reactions to people with AIDS. Journal of Applied Social Psychology, 26, 731–748.
Dijker, A. J., Koomen, W., van den Heuvel, H., & Frijda, N. H. (1996). Perceived antecedents of emotional reactions in inter-ethnic relations. British Journal of Social Psychology, 35, 313–329.
Dunbar, R. I. M. (1988). Primate social systems. Ithaca, NY: Cornell University Press.
Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), The handbook of cognition and emotion (pp. 45–60). Sussex, England: Wiley.
Ekman, P., & Davidson, R. J. (1994). The nature of emotion: Fundamental questions. New York: Oxford University Press.
Ekman, P., & Friesen, W. V. (1975). Unmasking the face: A guide to recognizing emotions from facial clues. Englewood Cliffs, NJ: Prentice Hall.
Esses, V. M., & Dovidio, J. F. (2002). The role of emotions in determining willingness to engage in intergroup contact. Personality and Social Psychology Bulletin, 28, 1202–1214.
Esses, V. M., Haddock, G., & Zanna, M. P. (1993). Values, stereotypes, and emotions as determinants of intergroup attitudes. In D. M. Mackie & D. L. Hamilton (Eds.), Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 137–166). San Diego, CA: Academic Press.
Everitt, B. S., & Dunn, G. (2001). Applied multivariate data analysis (2nd ed.). London: Arnold.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.
Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2002).
A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82, 878–902.
Fiske, S. T., Xu, J., Cuddy, A. C., & Glick, P. (1999). (Dis)respecting versus (dis)liking: Status and interdependence predict ambivalent stereotypes of competence and warmth. Journal of Social Issues, 55, 473–491.
Frijda, N. H. (1986). The emotions. Cambridge, England: Cambridge University Press.
Gilbert, G. M. (1951). Stereotype persistence and change among college students. Journal of Abnormal and Social Psychology, 46, 245–254.
Haddock, G., & Zanna, M. P. (1994). Preferring "housewives" to "feminists": Categorization and the favorability of attitudes towards women. Psychology of Women Quarterly, 18, 25–52.
Haddock, G., Zanna, M. P., & Esses, V. M. (1993). Assessing the structure of prejudicial attitudes: The case of attitudes toward homosexuals. Journal of Personality and Social Psychology, 65, 1105–1118.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1992). Multivariate data analysis (3rd ed.). New York: Macmillan.
Higgins, E. T. (1987). Self-discrepancy: A theory relating self and affect. Psychological Review, 94, 319–340.
Hurh, W. M., & Kim, K. C. (1989). The "success" image of Asian Americans: Its validity, and its practical and theoretical implications. Ethnic and Racial Studies, 12, 512–538.
Izard, C. E. (1978). Human emotions. New York: Plenum Press.
Izard, C. E. (1991). The psychology of emotions. New York: Plenum Press.
Karlins, M., Coffman, T. L., & Walters, G. (1969). On the fading of social stereotypes: Studies in three generations of college students. Journal of Personality and Social Psychology, 13, 1–16.
Katz, D., & Braly, K. (1933). Racial stereotypes of one hundred college students. Journal of Abnormal and Social Psychology, 28, 280–290.
Lazarus, R. S. (1991). Emotion and adaptation. New York: Oxford University Press.
Leakey, R. E. (1978). People of the lake: Mankind and its beginnings. New York: Avon.
Leakey, R. E., & Lewin, R. (1977). Origins: What new discoveries reveal about the emergence of our species and its possible future. New York: Dutton.
LeVine, R. A., & Campbell, D. T. (1972). Ethnocentrism: Theories of conflict, ethnic attitudes and group behavior. New York: Wiley.
Lewis, M. (1993). Self-conscious emotions: Embarrassment, pride, shame, and guilt. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 563–573). New York: Guilford Press.
Lickel, B., Hamilton, D. L., Wieczorkowska, G., Lewis, A., Sherman, S. J., & Uhles, A. N. (2000). Varieties of groups and the perception of group entitativity. Journal of Personality and Social Psychology, 78, 223–246.
Lickel, B., Schmader, T., & Barquissau, M. (2004). The evocation of moral emotions in intergroup contexts: The distinction between collective guilt and collective shame. In N. R. Branscombe & B. Doosje (Eds.), Collective guilt: International perspectives (pp. 35–55). New York: Cambridge University Press.
Mackie, D. M., Devos, T., & Smith, E. R. (2000). Intergroup emotions: Explaining offensive action tendencies in an intergroup context. Journal of Personality and Social Psychology, 79, 602–616.
Mackie, D. M., & Smith, E. R. (Eds.). (2002). From prejudice to intergroup emotions: Differentiated reactions to social groups. New York: Psychology Press.
Madon, S., Guyll, M., Aboufadel, K., Montiel, E., Smith, A., Palumbo, P., & Jussim, L. (2001). Ethnic and national stereotypes: The Princeton trilogy revisited and revised.
Personality and Social Psychology Bulletin, 27, 996–1010.
Nesse, R. M. (1990). Evolutionary explanations of emotions. Human Nature, 1, 261–289.
Neuberg, S. L., & Cottrell, C. A. (2002). Intergroup emotions: A sociofunctional approach. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 265–283). New York: Psychology Press.
Neuberg, S. L., Smith, D. M., & Asher, T. (2000). Why people stigmatize: Toward a biocultural framework. In T. F. Heatherton, R. E. Kleck, M. R. Hebl, & J. G. Hull (Eds.), The social psychology of stigma (pp. 31–61). New York: Guilford Press.
Neuberg, S. L., & Story, P. (2003). [Cross-religion similarities in social structures and processes: A test of the sociofunctional approach]. Unpublished raw data, Arizona State University.
Niemann, Y. F., Jennings, L., Rozelle, R. M., Baxter, J. C., & Sullivan, E. (1994). Use of free response and cluster analysis to determine stereotypes of eight groups. Personality and Social Psychology Bulletin, 20, 379–390.
Öhman, A. (1993). Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information processing mechanisms. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 511–536). New York: Guilford Press.
Parrott, W. G. (1991). The emotional experiences of envy and jealousy. In P. Salovey (Ed.), The psychology of jealousy and envy (pp. 3–30). New York: Guilford Press.
Plutchik, R. (1980). Emotion: A psychoevolutionary synthesis. New York: Harper & Row.
Plutchik, R. (2003). Emotions and life: Perspectives from psychology, biology, and evolution. Washington, DC: American Psychological Association.
Richerson, P., & Boyd, R. (1995, January). The evolution of human hypersociality.
Paper presented at the Ringberg Castle Symposium on Ideology, Warfare and Indoctrinability, Ringberg, Germany.
Roseman, I. J. (1984). Cognitive determinants of emotions: A structural theory. In P. Shaver (Ed.), Review of personality and social psychology: Emotions, relationships, and health (pp. 11–36). Beverly Hills, CA: Sage.
Rozin, P., Haidt, J., & McCauley, C. R. (1993). Disgust. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 575–594). New York: Guilford Press.
Schaller, M., & Neuberg, S. L. (2004). The nature in prejudice(s). Manuscript submitted for publication.
Schaller, M., Park, J. H., & Faulkner, J. (2003). Prehistoric dangers and contemporary prejudices. European Review of Social Psychology, 14, 105–137.
Scherer, K. R. (1988). Criteria for emotion-antecedent appraisal: A review. In V. Hamilton, G. H. Bower, & N. H. Frijda (Eds.), Cognitive perspectives on emotion and motivation (pp. 89–126). Norwell, MA: Kluwer Academic.
Shaver, P. R., Hazan, C., & Bradshaw, D. (1988). Love as attachment: The integration of three behavioral systems. In R. J. Sternberg & M. Barnes (Eds.), The anatomy of love (pp. 68–98). New Haven, CT: Yale University Press.
Sherif, M. (1966). In common predicament: Social psychology of intergroup conflict and cooperation. Boston: Houghton Mifflin.
Simon, H. A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29–39.
Smith, C. A., & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48, 813–838.
Smith, E. R. (1993). Social identity and social emotions: Toward new conceptualizations of prejudice. In D. M. Mackie & D. L. Hamilton (Eds.), Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 297–315). San Diego, CA: Academic Press.
Smith, E. R. (1999). Affective and cognitive implications of a group becoming a part of the self: New models of prejudice and of the self-concept. In D. Abrams & M. A. Hogg (Eds.), Social identity and social cognition (pp. 183–196). Malden, MA: Blackwell.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
Stephan, W. G., & Renfro, C. L. (2002). The role of threat in intergroup relations. In D. M. Mackie & E. R. Smith (Eds.), From prejudice to intergroup emotions: Differentiated reactions to social groups (pp. 191–207). New York: Psychology Press.
Stephan, W. G., & Stephan, C. W. (2000). An integrated threat theory of prejudice. In S. Oskamp (Ed.), Reducing prejudice and discrimination (pp. 23–45). Mahwah, NJ: Erlbaum.
Tooby, J., & Cosmides, L. (1990). The past explains the present: Emotional adaptations and the structure of ancestral environments. Ethology and Sociobiology, 11, 375–424.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
Weiner, B., Perry, R., & Magnusson, J. (1988). An attributional analysis of reactions to stigmas. Journal of Personality and Social Psychology, 55, 738–748.
Wilson, D. S., & Sober, E. (1994). Reintroducing group selection to the human behavioral sciences. Behavioral and Brain Sciences, 17, 585–654.
Yee, A. H. (1992). Asians as stereotypes and students: Misperceptions that persist. Educational Psychology Review, 4, 95–132.

1. In previous writings and presentations (Cottrell & Neuberg, 2003; Neuberg & Cottrell, 2002) we have described this framework as biocultural.
We have changed our labeling of these ideas to sociofunctional to better capture our focus on the functional psychological mechanisms that promote effective and successful social living. Note that this is merely a change in label and not in the content of our approach.

2. We are not suggesting that human sociality emerged because it benefits the survival of the group (i.e., a group selection process; see Sober & Wilson, 1998; Wilson & Sober, 1994), but rather because it benefits the overall fitness of the individual. Moreover, our evolution-based arguments should not be interpreted as deterministic (nor, for that matter, should any evolution-based argument); the social processes we propose to understand intergroup affect are far from invariable and inevitable. Indeed, these processes, once explicated, lend themselves nicely to effective practical interventions to reduce the maltreatment of groups and people around the globe (for further discussion, see Schaller & Neuberg, 2004). Finally, just because we believe that an evolution-inspired analysis shines light on certain unique complexities of intergroup affect does not in any way imply that the psychological processes and outcomes revealed by our analysis are morally, ethically, or legally justifiable.

3. Groups sometimes provide each other with opportunities as well as threats. However, in light of the great bulk of existing prejudice and intergroup relations research, we focus in this article on patterns of threats and related discrete emotions.

4. In exploratory fashion, we included items designed to assess threats that groups may pose to one's own group's moral standing in the hope that they would uniquely predict feelings of guilt. Unfortunately, we worded the items poorly, and the composite appears instead to capture a more general sense of threat.
We thus exclude this composite from all analyses to follow but note that including it alters neither our findings nor our conclusions.

5. For each target group, a chi-square difference test revealed that our a priori 10-factor threat model (10 specific threat factors, each represented by an item pair) demonstrated a good fit to the data (as shown by comparative fit index [CFI], root-mean-square error of approximation [RMSEA], and standardized root-mean-square residual [SRMR] values) and fit the data significantly better than a 1-factor threat model (1 general threat factor, represented by all threat items). Similar support was found for our emotion model: Chi-square difference tests revealed that our a priori 5-factor emotion model (anger, disgust, fear, pity, and envy factors, each represented by an item pair) demonstrated a good fit to the data (as shown by CFI, RMSEA, and SRMR values) and fit the data significantly better than a 1-factor emotion model (1 general emotion factor, represented by all emotion items), again for all target groups. Because anger and disgust are sometimes grouped together by exploratory factor analyses (as in research by Fiske et al., 2002), we also compared the 5-factor emotion model with a 4-factor emotion model that combined anger and disgust into 1 factor. For seven of the nine target groups, a chi-square difference test revealed that this 4-factor model fit the data significantly worse than the 5-factor model; for the remaining two target groups, the 4-factor model fit worse than our preferred 5-factor alternative, although not significantly so. In all, the CFAs strongly validate our theory-based decisions to use measures of relatively discrete threats and emotions.
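The nested-model logic behind these chi-square difference tests can be sketched in a few lines. This is an illustration, not the authors' code: the fit statistics below are hypothetical, and the tail probability uses the Wilson-Hilferty normal approximation so the sketch needs only the standard library. With 20 threat items and the usual identification (factor variances fixed to 1), the 1-factor model has 170 degrees of freedom and the 10-correlated-factor model has 125, so the difference test has 45 df.

```python
import math

def chi2_sf(x, df):
    """Upper-tail probability of a chi-square statistic, computed with
    the Wilson-Hilferty normal approximation (adequate for df >= 3)."""
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return 0.5 * math.erfc(z / math.sqrt(2))

def chi2_difference_test(chisq_restricted, df_restricted, chisq_full, df_full):
    """Compare nested CFA models: the difference in model chi-squares is
    itself chi-square distributed, with df equal to the df difference."""
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    return delta_chisq, delta_df, chi2_sf(delta_chisq, delta_df)

# Hypothetical fit statistics for one target group (invented for
# illustration; the article reports only that the 10-factor model fit
# significantly better than the 1-factor model).
d_chi, d_df, p = chi2_difference_test(
    chisq_restricted=620.0, df_restricted=170,  # 1-factor threat model
    chisq_full=280.0, df_full=125,              # 10-factor threat model
)
print(d_chi, d_df)  # 340.0 45
print(p < .001)     # True: reject the 1-factor restriction
```

The same comparison applies to the 5-factor versus 4-factor emotion models: collapsing anger and disgust into one factor frees four factor correlations, and a significant chi-square difference means the collapse is rejected.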
6. Because of our unsuccessful attempt to generate a valid measure of morality threat (see Footnote 4), we were unable to conduct our focal threat-emotion analysis for this threat's associated emotion (i.e., a test of the proposed link between threat to in-group morality and guilt). As a result of this failure to fully test our hypotheses related to guilt, we chose to discard guilt from further analyses. Note that including the discarded items in analyses does not alter any of our conclusions. These complete data are available from the authors by request.

7. Some of those findings were reported in preliminary form (Neuberg & Cottrell, 2002). The full data sets from these additional samples are available from the authors on request.

8. CFAs also offer some empirical support for this decision to arrange the 10 specific threats into four threat classes. We tested a higher order threat model with the second-order obstacles factor (on which six first-order threat factors load), the second-order contamination factor (on which two first-order threat factors load), the first-order physical safety threat factor, and the first-order nonreciprocity (by inability) threat factor. For each target group, this model demonstrated a marginally adequate fit to the data (as shown by CFI, RMSEA, and SRMR values). Although this four-factor threat model may be less than ideal to capture relationships among the threats, our current purposes rest with explaining threat-emotion links. As such, we have chosen to use this threat representation in which specific threats believed to elicit the same emotion are clustered together into threat classes. Note that the less than ideal status of this measurement model can only work against our hypotheses relating obstacle threats to anger and contamination threats to disgust.

9. We note two additional pieces of corroborative evidence for Hypothesis 5.
First, we tested the hypothesized threat-emotion links using group-level multiple regression analyses with the threat and emotion ratings for each target group averaged across all participants; these analyses are limited by the small sample size (nine target groups), which leaves them drastically underpowered. We also tested the hypothesized threat-emotion links using multilevel models that clustered the target group ratings by participant; these analyses provide an appropriate statistical means to account for the nonindependence of target group ratings. In all, both the group-level regression analyses and the multilevel models revealed patterns of specific threat-emotion links similar to those obtained from the individual-level regression analyses on the random samples.

10. Because we assign causal priority to perceived threat, we wanted to calculate the probability of perfectly replicating the five-cluster solution based on threat ratings (as shown in the left side of Table 8) in the emotion cluster analyses. After determining the probability of replicating each individual cluster, we calculated the product of these individual probabilities to obtain the probability of a perfect match between the five-cluster threat solution and an emotion cluster solution. We calculated this probability to be .00006. Because Asian Americans were the only group to "move" clusters from our threat cluster solution to our emotion cluster solution, we recalculated this probability without Asian Americans. For this probability of perfect replication with only the eight remaining groups, we obtained a value of .0003. Information on these calculations is available from the authors.

11. We are currently collecting such experimental data.

12. We thank Naomi Ellemers for suggesting this interesting possibility.
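The product rule described in Footnote 10 can be sketched in a few lines. The per-cluster probabilities below are hypothetical placeholders: the article reports only the final products (.00006 and .0003), not the individual cluster values, and the sketch assumes the clusters replicate independently, as the footnote's multiplication implies.

```python
import math

# Hypothetical per-cluster replication probabilities (placeholders; the
# article does not report the individual values, only their product).
per_cluster_probs = [0.30, 0.25, 0.20, 0.40, 0.10]

# Probability that the emotion solution perfectly reproduces all five
# threat clusters: the product of the per-cluster probabilities,
# assuming the clusters replicate independently.
p_perfect_match = math.prod(per_cluster_probs)
print(round(p_perfect_match, 6))  # 0.0006 with these placeholder values
```

The eight-group recalculation without Asian Americans is the same product taken with the affected cluster's revised replication probability, which is why it yields a different (here, larger) value.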
From checker at panix.com Mon Jul 18 00:06:51 2005 From: checker at panix.com (Premise Checker) Date: Sun, 17 Jul 2005 20:06:51 -0400 (EDT) Subject: [Paleopsych] Rolling Stone: The Long Emergency Message-ID: The Long Emergency http://www.rollingstone.com/news/story/_/id/7203633 Posted Mar 24, 2005 What's going to happen as we start running out of cheap gas to guzzle? By JAMES HOWARD KUNSTLER A few weeks ago, the price of oil ratcheted above fifty-five dollars a barrel, which is about twenty dollars a barrel more than a year ago. The next day, the oil story was buried on page six of the New York Times business section. Apparently, the price of oil is not considered significant news, even when it goes up five bucks a barrel in the span of ten days. That same day, the stock market shot up more than a hundred points because, CNN said, government data showed no signs of inflation. Note to clueless nation: Call planet Earth. Carl Jung, one of the fathers of psychology, famously remarked that "people cannot stand too much reality." What you're about to read may challenge your assumptions about the kind of world we live in, and especially the kind of world into which events are propelling us. We are in for a rough ride through uncharted territory. It has been very hard for Americans -- lost in dark raptures of nonstop infotainment, recreational shopping and compulsive motoring -- to make sense of the gathering forces that will fundamentally alter the terms of everyday life in our technological society. Even after the terrorist attacks of 9/11, America is still sleepwalking into the future. I call this coming time the Long Emergency. Most immediately we face the end of the cheap-fossil-fuel era.
It is no exaggeration to state that reliable supplies of cheap oil and natural gas underlie everything we identify as the necessities of modern life -- not to mention all of its comforts and luxuries: central heating, air conditioning, cars, airplanes, electric lights, inexpensive clothing, recorded music, movies, hip-replacement surgery, national defense -- you name it. The few Americans who are even aware that there is a gathering global-energy predicament usually misunderstand the core of the argument. That argument states that we don't have to run out of oil to start having severe problems with industrial civilization and its dependent systems. We only have to slip over the all-time production peak and begin a slide down the arc of steady depletion. The term "global oil-production peak" means that a turning point will come when the world produces the most oil it will ever produce in a given year and, after that, yearly production will inexorably decline. It is usually represented graphically in a bell curve. The peak is the top of the curve, the halfway point of the world's all-time total endowment, meaning half the world's oil will be left. That seems like a lot of oil, and it is, but there's a big catch: It's the half that is much more difficult to extract, far more costly to get, of much poorer quality and located mostly in places where the people hate us. A substantial amount of it will never be extracted. The United States passed its own oil peak -- about 11 million barrels a day -- in 1970, and since then production has dropped steadily. In 2004 it ran just above 5 million barrels a day (we get a tad more from natural-gas condensates). Yet we consume roughly 20 million barrels a day now. That means we have to import about two-thirds of our oil, and the ratio will continue to worsen. The U.S. peak in 1970 brought on a portentous change in geoeconomic power. 
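The bell curve described above is usually formalized as a Hubbert curve, in which annual production is the derivative of a logistic cumulative-production curve; in that model the peak falls exactly at the halfway point of the total endowment, as the article says. A minimal numerical sketch (the resource size and growth rate here are arbitrary placeholders, not estimates):

```python
import math

URR = 2000.0  # hypothetical ultimate recoverable resource (billion barrels)
k = 0.05      # hypothetical growth-rate parameter
# The peak is placed at t = 0 for convenience.

def cumulative(t):
    # Logistic cumulative production: Q(t) = URR / (1 + exp(-k * t))
    return URR / (1.0 + math.exp(-k * t))

def production(t):
    # Annual production is the derivative of Q(t): a bell-shaped curve
    q = cumulative(t)
    return k * q * (1.0 - q / URR)

# At the peak, cumulative production equals half the total endowment,
# and yearly production is at its maximum.
print(cumulative(0.0))   # half of URR
print(production(0.0))   # top of the bell curve
```

In this model the downslope mirrors the upslope, which is what makes the post-peak half of the endowment so consequential: it must be extracted against an inexorably declining yearly rate.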
Within a few years, foreign producers, chiefly OPEC, were setting the price of oil, and this in turn led to the oil crises of the 1970s. In response, frantic development of non-OPEC oil, especially the North Sea fields of England and Norway, essentially saved the West's ass for about two decades. Since 1999, these fields have entered depletion. Meanwhile, worldwide discovery of new oil has steadily declined to insignificant levels in 2003 and 2004. Some "cornucopians" claim that the Earth has something like a creamy nougat center of "abiotic" oil that will naturally replenish the great oil fields of the world. The facts speak differently. There has been no replacement whatsoever of oil already extracted from the fields of America or any other place. Now we are faced with the global oil-production peak. The best estimates of when this will actually happen have been somewhere between now and 2010. In 2004, however, after demand from burgeoning China and India shot up, Shell Oil was revealed to have wildly misstated its reserves, and Saudi Arabia proved incapable of goosing up its production despite promises to do so, the most knowledgeable experts revised their predictions and now concur that 2005 is apt to be the year of all-time global peak production. It will change everything about how we live. To aggravate matters, American natural-gas production is also declining, at five percent a year, despite frenetic new drilling, and with the potential of much steeper declines ahead. Because of the oil crises of the 1970s, the nuclear-plant disasters at Three Mile Island and Chernobyl and the acid-rain problem, the U.S. chose to make gas its first choice for electric-power generation. The result was that just about every power plant built after 1980 has to run on gas. Half the homes in America are heated with gas. To further complicate matters, gas isn't easy to import. Here in North America, it is distributed through a vast pipeline network.
Gas imported from overseas would have to be liquefied by chilling it to minus-260 degrees Fahrenheit, carried in special tanker ships, and unloaded (re-gasified) at special terminals, of which few exist in America. Moreover, the first attempts to site new terminals have met furious opposition because they are such ripe targets for terrorism. Some other things about the global energy predicament are poorly understood by the public and even our leaders. This is going to be a permanent energy crisis, and these energy problems will synergize with the disruptions of climate change, epidemic disease and population overshoot to produce higher orders of trouble. We will have to accommodate ourselves to fundamentally changed conditions. No combination of alternative fuels will allow us to run American life the way we have been used to running it, or even a substantial fraction of it. The wonders of steady technological progress achieved through the reign of cheap oil have lulled us into a kind of Jiminy Cricket syndrome, leading many Americans to believe that anything we wish for hard enough will come true. These days, even people who ought to know better are wishing ardently for a seamless transition from fossil fuels to their putative replacements. The widely touted "hydrogen economy" is a particularly cruel hoax. We are not going to replace the U.S. automobile and truck fleet with vehicles run on fuel cells. For one thing, the current generation of fuel cells is largely designed to run on hydrogen obtained from natural gas. The other way to get hydrogen in the quantities wished for would be electrolysis of water using power from hundreds of nuclear plants. Apart from the dim prospect of our building that many nuclear plants soon enough, there are also numerous severe problems with hydrogen's nature as an element that present forbidding obstacles to its use as a replacement for oil and gas, especially in storage and transport.
Wishful notions about rescuing our way of life with "renewables" are also unrealistic. Solar-electric systems and wind turbines face not only the enormous problem of scale but the fact that the components require substantial amounts of energy to manufacture and the probability that they can't be manufactured at all without the underlying support platform of a fossil-fuel economy. We will surely use solar and wind technology to generate some electricity for a period ahead but probably at a very local and small scale. Virtually all "biomass" schemes for using plants to create liquid fuels cannot be scaled up to even a fraction of the level at which things are currently run. What's more, these schemes are predicated on using oil and gas "inputs" (fertilizers, weed-killers) to grow the biomass crops that would be converted into ethanol or bio-diesel fuels. This is a net energy loser -- you might as well just burn the inputs and not bother with the biomass products. Proposals to distill trash and waste into oil by means of thermal depolymerization depend on the huge waste stream produced by a cheap oil and gas economy in the first place. Coal is far less versatile than oil and gas, extant in less abundant supplies than many people assume and fraught with huge ecological drawbacks -- as a contributor to greenhouse "global warming" gases and many health and toxicity issues ranging from widespread mercury poisoning to acid rain. You can make synthetic oil from coal, but the only time this was tried on a large scale was by the Nazis under wartime conditions, using impressive amounts of slave labor. If we wish to keep the lights on in America after 2020, we may indeed have to resort to nuclear power, with all its practical problems and eco-conundrums. Under optimal conditions, it could take ten years to get a new generation of nuclear power plants into operation, and the price may be beyond our means. Uranium is also a resource in finite supply. 
We are no closer to the more difficult project of atomic fusion, by the way, than we were in the 1970s. The upshot of all this is that we are entering a historical period of potentially great instability, turbulence and hardship. Obviously, geopolitical maneuvering around the world's richest energy regions has already led to war and promises more international military conflict. Since the Middle East contains two-thirds of the world's remaining oil supplies, the U.S. has attempted desperately to stabilize the region by, in effect, opening a big police station in Iraq. The intent was not just to secure Iraq's oil but to modify and influence the behavior of neighboring states around the Persian Gulf, especially Iran and Saudi Arabia. The results have been far from entirely positive, and our future prospects in that part of the world are not something we can feel altogether confident about. And then there is the issue of China, which, in 2004, became the world's second-greatest consumer of oil, surpassing Japan. China's surging industrial growth has made it increasingly dependent on the imports we are counting on. If China wanted to, it could easily walk into some of these places -- the Middle East, former Soviet republics in central Asia -- and extend its hegemony by force. Is America prepared to contest for this oil in an Asian land war with the Chinese army? I doubt it. Nor can the U.S. military occupy regions of the Eastern Hemisphere indefinitely, or hope to secure either the terrain or the oil infrastructure of one distant, unfriendly country after another. A likely scenario is that the U.S. could exhaust and bankrupt itself trying to do this, and be forced to withdraw back into our own hemisphere, having lost access to most of the world's remaining oil in the process. We know that our national leaders are hardly uninformed about this predicament. President George W. 
Bush was briefed on the dangers of the oil-peak situation as early as before the 2000 election, and repeatedly since then. In March, the Department of Energy released a report that officially acknowledges for the first time that peak oil is for real and states plainly that "the world has never faced a problem like this. Without massive mitigation more than a decade before the fact, the problem will be pervasive and will not be temporary." Most of all, the Long Emergency will require us to make other arrangements for the way we live in the United States. America is in a special predicament due to a set of unfortunate choices we made as a society in the twentieth century. Perhaps the worst was to let our towns and cities rot away and to replace them with suburbia, which had the additional side effect of trashing a lot of the best farmland in America. Suburbia will come to be regarded as the greatest misallocation of resources in the history of the world. It has a tragic destiny. The psychology of previous investment suggests that we will defend our drive-in utopia long after it has become a terrible liability. Before long, the suburbs will fail us in practical terms. We made the ongoing development of housing subdivisions, highway strips, fried-food shacks and shopping malls the basis of our economy, and when we have to stop making more of those things, the bottom will fall out. The circumstances of the Long Emergency will require us to downscale and re-scale virtually everything we do and how we do it, from the kind of communities we physically inhabit to the way we grow our food to the way we work and trade the products of our work. Our lives will become profoundly and intensely local. Daily life will be far less about mobility and much more about staying where you are. Anything organized on the large scale, whether it is government or a corporate business enterprise such as Wal-Mart, will wither as the cheap energy props that support bigness fall away.
The turbulence of the Long Emergency will produce a lot of economic losers, and many of these will be members of an angry and aggrieved former middle class. Food production is going to be an enormous problem in the Long Emergency. As industrial agriculture fails due to a scarcity of oil- and gas-based inputs, we will certainly have to grow more of our food closer to where we live, and do it on a smaller scale. The American economy of the mid-twenty-first century may actually center on agriculture, not information, not high tech, not "services" like real estate sales or hawking cheeseburgers to tourists. Farming. This is no doubt a startling, radical idea, and it raises extremely difficult questions about the reallocation of land and the nature of work. The relentless subdividing of land in the late twentieth century has destroyed the contiguity and integrity of the rural landscape in most places. The process of readjustment is apt to be disorderly and improvisational. Food production will necessarily be much more labor-intensive than it has been for decades. We can anticipate the re-formation of a native-born American farm-laboring class. It will be composed largely of the aforementioned economic losers who had to relinquish their grip on the American dream. These masses of disentitled people may enter into quasi-feudal social relations with those who own land in exchange for food and physical security. But their sense of grievance will remain fresh, and if mistreated they may simply seize that land. The way that commerce is currently organized in America will not survive far into the Long Emergency. Wal-Mart's "warehouse on wheels" won't be such a bargain in a non-cheap-oil economy. 
The national chain stores' 12,000-mile manufacturing supply lines could easily be interrupted by military contests over oil and by internal conflict in the nations that have been supplying us with ultra-cheap manufactured goods, because they, too, will be struggling with similar issues of energy famine and all the disorders that go with it. As these things occur, America will have to make other arrangements for the manufacture, distribution and sale of ordinary goods. They will probably be made on a "cottage industry" basis rather than the factory system we once had, since the scale of available energy will be much lower -- and we are not going to replay the twentieth century. Tens of thousands of the common products we enjoy today, from paints to pharmaceuticals, are made out of oil. They will become increasingly scarce or unavailable. The selling of things will have to be reorganized at the local scale. It will have to be based on moving merchandise shorter distances. It is almost certain to result in higher costs for the things we buy and far fewer choices. The automobile will be a diminished presence in our lives, to say the least. With gasoline in short supply, not to mention tax revenue, our roads will surely suffer. The interstate highway system is more delicate than the public realizes. If the "level of service" (as traffic engineers call it) is not maintained to the highest degree, problems multiply and escalate quickly. The system does not tolerate partial failure. The interstates are either in excellent condition, or they quickly fall apart. America today has a railroad system that the Bulgarians would be ashamed of. Neither of the two major presidential candidates in 2004 mentioned railroads, but if we don't refurbish our rail system, then there may be no long-range travel or transport of goods at all a few decades from now. The commercial aviation industry, already on its knees financially, is likely to vanish. 
The sheer cost of maintaining gigantic airports may not justify the operation of a much-reduced air-travel fleet. Railroads are far more energy efficient than cars, trucks or airplanes, and they can be run on anything from wood to electricity. The rail-bed infrastructure is also far more economical to maintain than our highway network. The successful regions in the twenty-first century will be the ones surrounded by viable farming hinterlands that can reconstitute locally sustainable economies on an armature of civic cohesion. Small towns and smaller cities have better prospects than the big cities, which will probably have to contract substantially. The process will be painful and tumultuous. In many American cities, such as Cleveland, Detroit and St. Louis, that process is already well advanced. Others have further to fall. New York and Chicago face extraordinary difficulties, being oversupplied with gigantic buildings out of scale with the reality of declining energy supplies. Their former agricultural hinterlands have long been paved over. They will be encysted in a surrounding fabric of necrotic suburbia that will only amplify and reinforce the cities' problems. Still, our cities occupy important sites. Some kind of urban entities will exist where they are in the future, but probably not the colossi of twentieth-century industrialism. Some regions of the country will do better than others in the Long Emergency. The Southwest will suffer in proportion to the degree that it prospered during the cheap-oil blowout of the late twentieth century. I predict that Sunbelt states like Arizona and Nevada will become significantly depopulated, since the region will be short of water as well as gasoline and natural gas. Imagine Phoenix without cheap air conditioning. I'm not optimistic about the Southeast, either, for different reasons. 
I think it will be subject to substantial levels of violence as the grievances of the formerly middle class boil over and collide with the delusions of Pentecostal Christian extremism. The latent encoded behavior of Southern culture includes an outsized notion of individualism and the belief that firearms ought to be used in the defense of it. This is a poor recipe for civic cohesion. The Mountain States and Great Plains will face an array of problems, from poor farming potential to water shortages to population loss. The Pacific Northwest, New England and the Upper Midwest have somewhat better prospects. I regard them as less likely to fall into lawlessness, anarchy or despotism and more likely to salvage the bits and pieces of our best social traditions and keep them in operation at some level. These are daunting and even dreadful prospects. The Long Emergency is going to be a tremendous trauma for the human race. We will not believe that this is happening to us, that 200 years of modernity can be brought to its knees by a world-wide power shortage. The survivors will have to cultivate a religion of hope -- that is, a deep and comprehensive belief that humanity is worth carrying on. If there is any positive side to stark changes coming our way, it may be in the benefits of close communal relations, of having to really work intimately (and physically) with our neighbors, to be part of an enterprise that really matters and to be fully engaged in meaningful social enactments instead of being merely entertained to avoid boredom. Years from now, when we hear singing at all, we will hear ourselves, and we will sing with our whole hearts. 
Adapted from The Long Emergency, 2005, by James Howard Kunstler From checker at panix.com Mon Jul 18 00:07:00 2005 From: checker at panix.com (Premise Checker) Date: Sun, 17 Jul 2005 20:07:00 -0400 (EDT) Subject: [Paleopsych] Galen Strawson: Against Narrativity Message-ID: Galen Strawson: Against Narrativity Ratio (new series) XVII 4 December 2004 0034-0006 [Thanks to Alice Andrews for this. I'm joyously chucking a whole steamer trunk of hypotheses! I have been arguing for monogamy (sexual fidelity, actually) on the grounds that one's Partner is one's Mirror and that to preserve one's unity of self one must have a single Mirror. But now Strawson made me see that I'm speaking of the unity of a narrative self. Now it happens that both I and my Mirror have strongly narrative personalities, but my argument for monogamy won't go over for those with episodic personalities. (This does not rule out other arguments, but mine is one I haven't seen anyone else espouse.) So out goes my hypothesis! [On the other hand, my hypothesis about pluralism continues and is, in fact, reinforced by Strawson. His article starts out by embracing pluralism but ends up beating a drum for the episodic self as the one true way. Too bad about that. [I almost automatically evaluate women, not by their beauty or their status or even their intelligence, these things not so much, but rather how fit they would be to become my lifelong Mirror. But that's just me.] Department of Philosophy University of Reading Reading RG6 6AA email gstrawson at mac.com Abstract I argue against two popular claims. The first is a descriptive, empirical thesis about the nature of ordinary human experience: 'each of us constructs and lives a "narrative" ...this narrative is us, our identities' (Oliver Sacks); 'self is a perpetually rewritten story . . . in the end, we become the autobiographical narratives by which we "tell about" our lives' (Jerry Bruner); 'we are all virtuoso novelists.
...We try to make all of our material cohere into a single good story. And that story is our autobiography. The chief fictional character ... of that autobiography is one's self' (Dan Dennett). The second is a normative, ethical claim: we ought to live our lives narratively, or as a story; a 'basic condition of making sense of ourselves is that we grasp our lives in a narrative' and have an understanding of our lives 'as an unfolding story' (Charles Taylor). A person 'creates his identity [only] by forming an autobiographical narrative--a story of his life', and must be in possession of a full and 'explicit narrative [of his life] to develop fully as a person' (Marya Schechtman). -------------------------- Sec. 1. Talk of narrative is intensely fashionable in a wide variety of disciplines including philosophy, psychology, theology, anthropology, sociology, political theory, literary studies, religious studies, psychotherapy and even medicine. There is widespread agreement that human beings typically see or live or experience their lives as a narrative or story of some sort, or at least as a collection of stories. I'll call this the psychological Narrativity thesis, using the word 'Narrative' with a capital letter to denote a specifically psychological property or outlook. The psychological Narrativity thesis is a straightforwardly empirical, descriptive thesis about the way ordinary human beings actually experience their lives. This is how we are, it says, this is our nature. The psychological Narrativity thesis is often coupled with a normative thesis, which I'll call the ethical Narrativity thesis. This states that experiencing or conceiving one's life as a narrative is a good thing; a richly Narrative outlook is essential to a well-lived life, to true or full personhood. The descriptive thesis and the normative thesis have four main combinations. One may, to begin, think the descriptive thesis true and the normative one false. 
One may think that we are indeed deeply Narrative in our thinking and that it's not a good thing. The protagonist of Sartre's novel La nausée holds something like this view.1 So do the Stoics, as far as I can see. n. 1 Sartre 1938. Second, and contrariwise, one may think the descriptive thesis false and the normative one true. One may grant that we are not all naturally Narrative in our thinking but insist that we should be, and need to be, in order to live a good life. There are versions of this view in Plutarch2 and a host of present-day writings. n. 2 See e.g. 100AD, pp. 214-7 (473B-474B). Third, one may think both theses are true: one may think that all normal non-pathological human beings are naturally Narrative and also that Narrativity is crucial to a good life. This is the dominant view in the academy today, followed by the second view. It does not entail that everything is as it should be; it leaves plenty of room for the idea that many of us would profit from being more Narrative than we are, and the idea that we can get our self-narratives wrong in one way or another. Finally, one may think that both theses are false. This is my view. I think the current widespread acceptance of the third view is regrettable. It's just not true that there is only one good way for human beings to experience their being in time. There are deeply non-Narrative people and there are good ways to live that are deeply non-Narrative. I think the second and third views hinder human self-understanding, close down important avenues of thought, impoverish our grasp of ethical possibilities, needlessly and wrongly distress those who do not fit their model, and are potentially destructive in psychotherapeutic contexts. Sec.
2 The first thing I want to put in place is a distinction between one's experience of oneself when one is considering oneself principally as a human being taken as a whole, and one's experience of oneself when one is considering oneself principally as an inner mental entity or 'self' of some sort--I'll call this one's self-experience. When Henry James says, of one of his early books, 'I think of ... the masterpiece in question ... as the work of quite another person than myself ...a rich ...relation, say, who . . . suffers me still to claim a shy fourth cousinship',3 he has no doubt that he is the same human being as the author of that book, but he does not feel he is the same self or person as the author of that book. It is this phenomenon of experiencing oneself as a self that concerns me here. One of the most important ways in which people tend to think of themselves (quite independently of religious belief) is as things whose persistence conditions are not obviously or automatically the same as the persistence conditions of a human being considered as a whole. Petrarch, Proust, Parfit and thousands of others have given this idea vivid expression. I'm going to take its viability for granted and set up another distinction--between 'Episodic' and 'Diachronic' self-experience--in terms of it. n. 3 1915: 562-3. Sec. 3 The basic form of Diachronic self-experience is that [D] one naturally figures oneself, considered as a self, as something that was there in the (further) past and will be there in the (further) future - something that has relatively long-term diachronic continuity, something that persists over a long stretch of time, perhaps for life. I take it that many people are naturally Diachronic, and that many who are Diachronic are also Narrative in their outlook on life. If one is Episodic, by contrast, [E] one does not figure oneself, considered as a self, as something that was there in the (further) past and will be there in the (further) future.
One has little or no sense that the self that one is was there in the (further) past and will be there in the future, although one is perfectly well aware that one has long-term continuity considered as a whole human being. Episodics are likely to have no particular tendency to see their life in Narrative terms.4 n.4 The Episodic/Diachronic distinction is not the same thing as the Narrative/non-Narrative distinction, as will emerge; but there are marked correlations between them. The Episodic and Diachronic styles of temporal being are radically opposed, but they are not absolute or exceptionless. Predominantly Episodic individuals may sometimes connect to charged events in their pasts in such a way that they feel that those events happened to them--embarrassing memories are a good example--and anticipate events in their futures in such a way that they think that those events are going to happen to them--thoughts of future death can be a good example. So too predominantly Diachronic individuals may sometimes experience an Episodic lack of linkage with well remembered parts of their past. It may be that the basic Episodic disposition is less common in human beings than the basic Diachronic disposition, but many factors may induce variations in individuals. I take it that the fundamentals of temporal temperament are genetically determined, and that we have here to do with a deep 'individual difference variable', to put it in the language of experimental psychology.
Individual variation in time-style, Episodic or Diachronic, Narrative or non-Narrative, will be found across all cultures, so that the same general spread will be found in a so-called 'revenge culture', with its essentially Diachronic emphasis, as in a more happy-go-lucky culture.5 Compatibly with that, one's exact position in Episodic/Diachronic/Narrative/non-Narrative state-space may vary significantly over time according to what one is doing or thinking about, one's state of health, and so on; and it may change markedly with increasing age. n. 5 Although a culture could in theory exert significant selective pressure on a psychological trait. For descriptions of revenge culture see Blumenfeld 2003. Diachronics and Episodics are likely to misunderstand one another badly. Diachronics may feel that there is something chilling, empty and deficient about the Episodic life. They may fear it, although it is no less full or emotionally articulated than the Diachronic life, no less thoughtful or sensitive, no less open to friendship, love and loyalty. And certainly the two forms of life differ significantly in their ethical and emotional form. But it would be a great mistake to think that the Episodic life is bound to be less vital or in some way less engaged, or less humane, or less humanly fulfilled. If Heideggerians think that Episodics are necessarily 'inauthentic' in their experience of being in time, so much the worse for their notion of authenticity.6 And if Episodics are moved to respond by casting aspersions on the Diachronic life--finding it somehow macerated or clogged, say, or excessively self-concerned, inauthentically second-order--they too will be mistaken if they think it an essentially inferior form of human life. n. 6 Cf. e.g. Heidegger 1927.
There is one sense in which Episodics are by definition more located in the present than Diachronics, so far as their self-experience is concerned, but it does not follow, and is not true, that Diachronics are less present in the present moment than Episodics, any more than it follows, or is true, that in the Episodic life the present is somehow less informed by or responsible to the past than it is in the Diachronic life. What is true is that the informing and the responsiveness have different characteristics and different experiential consequences in the two cases. Faced with sceptical Diachronics, who insist that Episodics are (essentially) dysfunctional in the way they relate to their own past, Episodics will reply that the past can be present or alive in the present without being present or alive as the past. The past can be alive - arguably more genuinely alive--in the present simply in so far as it has helped to shape the way one is in the present, just as musicians' playing can incorporate and body forth their past practice without being mediated by any explicit memory of it. What goes for musical development goes equally for ethical development, and Rilke's remarks on poetry and memory, which have a natural application to the ethical case, suggest one way in which the Episodic attitude to the past may have an advantage over the Diachronic: 'For the sake of a single poem', he writes, 'you must have ... many ...memories. ... And yet it is not enough to have memories. ... For the memories themselves are not important.' They give rise to a good poem 'only when they have changed into our very blood, into glance and gesture, and are nameless, no longer to be distinguished from ourselves.'7 n. 
7 Among those whose writings show them to be markedly Episodic I propose Michel de Montaigne, the Earl of Shaftesbury, Stendhal, Hazlitt, Ford Madox Ford, Virginia Woolf, Borges, Fernando Pessoa, Iris Murdoch (a strongly Episodic person who is a natural story teller), Freddie Ayer, Goronwy Rees, Bob Dylan. Proust is another candidate, in spite of his memoriousness (which may be inspired by his Episodicity); also Emily Dickinson. On the other side--to begin with--Plato, St. Augustine, Heidegger, Tom Nagel, probably Nietzsche, all the champions of narrative and Narrativity in the current ethicopsychological debate, and some of my closest friends. Sec. 4 How do Episodicity and Diachronicity relate to Narrativity? Suppose that being Diachronic is at least necessary for being Narrative. Since it's true by definition that if you're Diachronic you're not Episodic and conversely, it follows that if you're Episodic you're not Narrative. But I think that the strongly Episodic life is one normal, non-pathological form of life for human beings, and indeed one good form of life for human beings, one way to flourish. So I reject both the psychological Narrativity thesis and the normative, ethical Narrativity thesis. I need to say more about the Episodic life, and since I find myself to be relatively Episodic, I'll use myself as an example. I have a past, like any human being, and I know perfectly well that I have a past. I have a respectable amount of factual knowledge about it, and I also remember some of my past experiences 'from the inside', as philosophers say. And yet I have absolutely no sense of my life as a narrative with form, or indeed as a narrative without form. Absolutely none. Nor do I have any great or special interest in my past. Nor do I have a great deal of concern for my future. That's one way to put it--to speak in terms of limited interest.
Another way is to say that it seems clear to me, when I am experiencing or apprehending myself as a self, that the remoter past or future in question is not my past or future, although it is certainly the past or future of GS the human being. This is more dramatic, but I think it is equally correct, when I am figuring myself as a self. I have no significant sense that I--the I now considering this question--was there in the further past. And it seems clear to me that this is not a failure of feeling. It is, rather, a registration of a fact about what I am--about what the thing that is currently considering this problem is. I will use 'I*' to represent: that which I now experience myself to be when I'm apprehending myself specifically as an inner mental presence or self. 'I*' comes with a large family of cognate forms--'me*', 'my*', 'you*', 'oneself*', 'themselves*', and so on. The metaphysical presumption built into these terms is that they succeed in making genuine reference to an inner mental something that is reasonably called a 'self'. But it doesn't matter whether or not the presumption is correct.8 n. 8 The term 'I*' and its cognates can function in phenomenological contexts to convey the content of a form of experience that incorporates the presumption whether or not the presumption is actually correct. I'll omit the '*' when it's not necessary. So: it's clear to me that events in my remoter past didn't happen to me*. But what does this amount to? It certainly doesn't mean that I don't have any autobiographical memories of these past experiences. I do. Nor does it mean that my autobiographical memories don't have what philosophers call a 'from-the-inside' character. Some of them do. And they are certainly the experiences of the human being that I am. It does not, however, follow from this that I experience them as having happened to me*, or indeed that they did happen to me*.
They certainly do not present as things that happened to me*, and I think I'm strictly, literally correct in thinking that they did not happen to me*. --That can't be right. If one of my remembered experiences has a from-the-inside character it must--by definition--be experienced as something that happened to me*. This may seem plausible at first, but it's a mistake: the from-the-inside character of a memory can detach completely from any sense that one is the subject of the remembered experience. My memory of falling out of a boat has an essentially from-the-inside character, visually (the water rushing up to meet me), kinaesthetically, proprioceptively, and so on.9 It certainly does not follow that it carries any feeling or belief that what is remembered happened to me*, to that which I now apprehend myself to be when I am apprehending myself specifically as a self. n. 9 It does not have any sort of 'from-the-outside' character (that would be a bit like my seeing a film of myself falling taken by a third party). This doesn't follow even when emotion figures in the from-the-inside character of the autobiographical memory. The inference from [1] The memory has a from-the-inside character in emotional respects to [2] The memory is experienced as something that happened to me* is simply not valid, although for many people [1] and [2] are often or usually true together. For me this is a plain fact of experience. I'm well aware that my past is mine in so far as I am a human being, and I fully accept that there's a sense in which it has special relevance to me* now, including special emotional and moral relevance. At the same time I have no sense that I* was there in the past, and think it obvious that I* was not there, as a matter of metaphysical fact.
As for my practical concern for my future, which I believe to be within the normal human range (low end), it is biologically--viscerally--grounded and autonomous in such a way that I can experience it as something immediately felt even though I have no significant sense that I* will be there in the future. Sec. 5 So much, briefly, for the Episodic life. What about the Narrative life? And what might it mean to say that human life is 'narrative' in nature? And must you be Diachronic to be Narrative? There are many questions. One clear statement of the psychological Narrativity thesis is given by Roquentin in Sartre's novel La nausée: "a man is always a teller of stories, he lives surrounded by his own stories and those of other people, he sees everything that happens to him in terms of these stories and he tries to live his life as if he were recounting it."10 n. 10 1938, p. 64. Sartre is as much concerned with relatively short-term passages of life as with life as a whole. Sartre sees the narrative, story-telling impulse as a defect, regrettable. He accepts the psychological Narrativity thesis while rejecting the ethical Narrativity thesis. He thinks human Narrativity is essentially a matter of bad faith, of radical (and typically irremediable) inauthenticity, rather than something essential for authenticity. The pro-Narrative majority may concede to Sartre that Narrativity can go wrong while insisting that it's not all bad and that it is necessary for a good life. I'm with Sartre on the ethical issue, but I want now to consider some statements of the psychological Narrativity thesis. Oliver Sacks puts it by saying that 'each of us constructs and lives a "narrative" ...this narrative is us, our identities'.
The distinguished psychologist Jerry Bruner writes of 'the stories we tell about our lives', claiming that 'self is a perpetually rewritten story', and that 'in the end, we become the autobiographical narratives by which we "tell about" our lives'.11 Dan Dennett claims that n. 11 Sacks 1985, p. 110; Bruner 1987, pp. 11, 15, 12; 1994, p. 53. "we are all virtuoso novelists, who find ourselves engaged in all sorts of behaviour, and we always try to put the best 'faces' on it we can. We try to make all of our material cohere into a single good story. And that story is our autobiography. The chief fictional character at the centre of that autobiography is one's self."12 n. 12 Dennett 1988, p. 1029. Marya Schechtman goes further, twisting the ethical and the psychological Narrativity theses tightly together in a valuably forthright manner. A person, she says, 'creates his identity [only] by forming an autobiographical narrative--a story of his life'. One must be in possession of a full and 'explicit narrative [of one's life] to develop fully as a person'.13 n. 13 Schechtman 1997, pp. 93, 119. Charles Taylor presents it this way: a 'basic condition of making sense of ourselves', he says, 'is that we grasp our lives in a narrative' and have an understanding of our lives 'as an unfolding story'. This is not, he thinks, 'an optional extra'; our lives exist 'in a space of questions, which only a coherent narrative can answer'.14 He is backed up by Claire in Douglas Coupland's novel Generation X: 'Claire ...breaks the silence by saying that it's not healthy to live life as a succession of isolated little cool moments. "Either our lives become stories, or there's no way to get through them"'; but Taylor builds a lot more ethical weight into what's involved in getting through life. It is n. 14 1989, pp. 47, 52.
"because we cannot but orient ourselves to the good, and hence determine our place relative to it and hence determine the direction of our lives, [that] we must inescapably understand our lives in narrative form, as a 'quest' [and] must see our lives in story."15 n. 15 1989, pp. 51-2. I reject the 'because' and the second 'hence'. This, he says, is an 'inescapable structural requirement of human agency',16 and Paul Ricoeur appears to concur: n. 16 1989, p. 52. "How, indeed, could a subject of action give an ethical character to his or her own life taken as a whole if this life were not gathered together in some way, and how could this occur if not, precisely, in the form of a narrative?"17 n. 17 1990, p. 158. Here my main puzzlement is about what it might be to 'give an ethical character to [one's] own life taken as a whole' in some explicit way, and about why on earth, in the midst of the beauty of being, it should be thought to be important to do this. I think that those who think in this way are motivated by a sense of their own importance or significance that is absent in other human beings. Many of them, connectedly, have religious commitments. They are wrapped up in forms of religious belief that are--like almost all religious belief--really all about self.18 n. 18 Religious belief is one of the fundamental vehicles of human narcissism (clearly a sense of one's own importance is much more likely to be the cause of religious belief in someone who has come to religion than in someone who has been born into it). Alasdair MacIntyre is perhaps the founding figure in the modern Narrativity camp, and his view is similar to Taylor's. 'The unity of an individual life', he says, 'is the unity of a narrative embodied in a single life. To ask "What is the good for me?" is to ask how best I might live out that unity and bring it to completion....' The unity of a human life, he continues, "is the unity of a narrative quest ... 
[and] the only criteria for success or failure in a human life as a whole are the criteria for success or failure in a narrated or to-be-narrated quest. ...A quest for what? ...a quest for the good ... the good life for man is the life spent in seeking for the good life for man."19 n. 19 1981, pp. 203-4. MacIntyre's claim seems at first non-psychological: a good life is one that has narrative unity. But a good life is one spent seeking the good life, and there is a strong suggestion that seeking the good life requires taking up a Narrative perspective; in which case narrative unity requires Narrativity. Is any of this true? I don't think so. It seems to me that MacIntyre, Taylor and all other supporters of the ethical Narrativity thesis are really just talking about themselves. It may be that what they are saying is true for them, both psychologically and ethically. This may be the best ethical project that people like themselves can hope to engage in.20 But even if it is true for them it is not true for other types of ethical personality, and many are likely to be thrown right off their own truth by being led to believe that Narrativity is necessary for a good life. My own conviction is that the best lives almost never involve this kind of self-telling, and that we have here yet another deep divider of the human race. n. 20 One problem with it, and it is a deep problem, is that one is almost certain to get one's 'story' wrong, in some more or less sentimental way--unless, perhaps, one has the help of a truly gifted therapist. When a Narrative like John Campbell claims that 'identity [through time] is central to what we care about in our lives: one thing I care about is what I have made of my life'21, I'm as bewildered as Goronwy Rees when he writes n. 21 1994, p. 190.
"For as long as I can remember it has always surprised and slightly bewildered me that other people should take it so much for granted that they each possess what is usually called 'a character'; that is to say, a personality [or personality-possessing self] with its own continuous history....I have never been able to find anything of that sort in myself. ... How much I admire those writers who are actually able to record the growth of what they call their personality, describe the conditions which determined its birth, lovingly trace the curve of its development. ... For myself it would be quite impossible to tell such a story, because at no time in my life have I had that enviable sensation of constituting a continuous personality....As a child this did not worry me, and if indeed I had known at that time of Der Mann ohne Eigenschaften [The Man without Qualities, a novel by Robert Musil], the man without qualities, I would have greeted him as my blood brother and rejoiced because I was not alone in the world; as it was, I was content with a private fantasy of my own in which I figured as Mr. Nobody."22 n. 22 1960, pp. 9-10. Unlike Rees, I have a perfectly good grasp of myself as having a certain personality, but I'm completely uninterested in the answer to the question 'What has GS made of his life?', or 'What have I made of my life?'. I'm living it, and this sort of thinking about it is no part of it. This does not mean that I am in any way irresponsible. It is just that what I care about, in so far as I care about myself and my life, is how I am now. The way I am now is profoundly shaped by my past, but it is only the present shaping consequences of the past that matter, not the past as such. I agree with the Earl of Shaftesbury: The metaphysicians ...affirm that if memory be taken away, the self is lost. [But] what matter for memory? What have I to do with that part? If, whilst I am, I am as I should be, what do I care more? 
And thus let me lose self every hour, and be twenty successive selfs, or new selfs, 'tis all one to me: so [long as] I lose not my opinion [i.e. my overall outlook, my character, my moral identity]. If I carry that with me 'tis I; all is well. ...- The now; the now. Mind this: in this is all.23 n. 23 Shaftesbury 1698-1712, pp. 136-137. Epictetus is an important influence. I think, then, that the ethical Narrativity thesis is false, and that the psychological Narrativity thesis is also false in any non-trivial version. What do I mean by non-trivial? Well, if someone says, as some do, that making coffee is a narrative that involves Narrativity, because you have to think ahead, do things in the right order, and so on, and that everyday life involves many such narratives, then I take it the claim is trivial.24 n. 24 Taylor is explicit that it is when I am not 'dealing with such trivial questions as where I shall go in the next five minutes but with the issue of my place relative to the good', that 'making sense of my present action ... requires a narrative understanding of my life' (1989, p. 48). Is there some burden on me to explain the popularity of the two theses, given that I think that they're false? Hardly. Theorizing human beings tend to favour false views in matters of this kind. I do, though, think that intellectual fashion is part of the explanation. I also suspect that those who are drawn to write on the subject of 'narrativity' tend to have strongly Diachronic and Narrative outlooks or personalities, and generalize from their own case with that special, fabulously misplaced confidence that people feel when, considering elements of their own experience that are existentially fundamental for them, they take it that they must also be fundamental for everyone else.25 n. 25 I think this may be the greatest single source of unhappiness in human intercourse. Sec. 6. --All very interesting, but what exactly is (upper-case) Narrativity? 
You still haven't addressed the question directly, and you're running out of time. Perhaps the first thing to say is that being Diachronic doesn't already entail being Narrative. There must be something more to experiencing one's life as a narrative than simply being Diachronic. For one can be Diachronic, naturally experiencing oneself(*) as something existing in the past and future without any particular sense of one's life as constituting a narrative. --Fine, but you haven't told me what a (lower-case) narrative is either. Well, the paradigm of a narrative is a conventional story told in words. I take the term to attribute--at the very least--a certain sort of developmental and hence temporal unity or coherence to the things to which it is standardly applied--lives, parts of lives, pieces of writing. So it doesn't apply to random or radically unconnected sequences of events even when they are sequentially and indeed contiguously temporally ordered, or to purely picaresque or randomly 'cut-up' pieces of writing.26 n. 26 There are, however, many interesting complications. See Life in Time. --This doesn't take us very far, because we still need to know what makes developmental unity or coherence in a life specifically narrative in nature. After all, there's a clear sense in which every human life is a developmental unity--a historical-characteral developmental unity as well as a biological one--just in being the life of a single human being. Putting aside cases of extreme insanity, any human life, even a highly disordered one, can be the subject of an outstanding biography that possesses all the narrative-unity-related virtues of that literary form. But if this sort of developmental unity is sufficient for narrative structure in the sense of the narrativity thesis, then the thesis is trivially true of all human beings. Actually, even dogs and horses can be the subject of excellent biographies. True. 
And this, I think, is why the distinctive claim of the defenders of the psychological Narrativity thesis is that for a life to be a narrative in the required sense it must be lived Narratively. The person whose life it is must see or feel it as a narrative, construe it as a narrative, live it as a narrative. One could put this roughly by saying that lower-case or 'objective' narrativity requires upper-case or 'subjective' Narrativity.27 n. 27 MacIntyre does not in the passages I have quoted explicitly say that the narrativity of a life requires Narrativity. In After Virtue he is particularly concerned with the idea that 'to think of a human life as a narrative unity is to think in a way alien to the dominant individualist and bureaucratic modes of modern culture' (1981, p. 211), and this remark was principally a criticism--an excellent one--of the social sciences of the time. --Now you're using the notion of upper-case psychological Narrativity to characterize the notion of lower-case 'objective' narrativity, and I still don't have a clear sense of what upper-case Narrativity is. Well, it's not easy, but perhaps one can start from the idea of a construction in the sense of a construal. The Narrative outlook clearly involves putting some sort of construction--a unifying or form-finding construction--on the events of one's life, or parts of one's life. I don't think this construction need involve any clearly intentional activity, nor any departure from or addition to the facts. But the Narrative attitude must (as we have already agreed) amount to something more than a disposition to grasp one's life as a unity simply in so far as it is the life of a biologically single human being. Nor can it consist just in the ability to give a sequential record of the actual course of one's life--the actual history of one's life--even if one's life does in fact exemplify a classical pattern of narrative development independently of any construction or interpretation.
One must in addition engage--to repeat--in some sort of construal of one's life. One must have some sort of relatively large-scale coherence-seeking, unity-seeking, pattern-seeking, or most generally [F] form-finding tendency when it comes to one's apprehension of one's life, or relatively large-scale parts of one's life.28 n. 28 From now on I will omit the qualification about 'parts of one's life' and take it as read. --But this doesn't even distinguish Narrativity from Diachronicity, for to be Diachronic is already to put a certain construction on one's life--on the life of the human being that one is: it is to apprehend that life through the life-unifying sense that one(*) was there in the past and will be there in the future. And yet you say being Diachronic is not enough for being Narrative. I'm prepared to allow that to be Diachronic is already to put a certain construction on one's life in the sense you specify. Nevertheless one can be Diachronic without actively conceiving of one's life, consciously or unconsciously, as some sort of ethical-historical-characterological developmental unity, or in terms of a story, a Bildung or 'quest'. One can be Diachronic without one's sense of who or what one is having any significant sort of narrative structure. And one can be Diachronic without one's apprehension of oneself as something that persists in time having any great importance for one.29 n. 29 'Discern', 'apprehend', 'find', 'detect' all have non-factive readings. --You've already said that, and the question remains unanswered: what sort of construal is required for Narrativity? When does one cross the line from mere Diachronicity to Narrativity? This is still luminously unclear. I agree that the proposal that form-finding is a necessary condition of Narrativity is very unspecific, but its lack of specificity may be part of its value, and it seems clear to me that Diachronicity (D) and form-finding (F) are independent of each other.
In practice, no doubt, they often come together, but one can imagine [-D +F] an Episodic person in whom a form-finding tendency is stimulated precisely by lack of a Diachronic outlook, and, conversely, [+D -F] a Diachronic person who lives, by force of circumstance, an intensely picaresque and disjointed life, while having absolutely no tendency to seek unity or narrative-developmental pattern in it. Other Diachronics in similar circumstances may move from [+D -F] to [+D +F], acquiring a form-finding tendency precisely because they become distressed by the 'one damned thing after another'30 character of their lives. The great and radically non-Narrative Stendhal might be judged to be an example of this, in the light of all his chaotic autobiographical projects, although I would be more inclined to classify him as [-D +F].31 Either way, the fact remains that one can be Diachronic while being very unreflective about oneself. One can be inclined to think, of any event in one's past of which one is reminded, that it happened to oneself*, without positively grasping one's life as a unity in any further--e.g. specifically narrative--sense. n. 30 Hubbard 1909, p. 32. n. 31 I judge Stendhal to be strongly Episodic but subject to Diachronic flashes. Jack Kerouac is I think a clear case of an Episodic looking for larger form. There are also clear elements of this in Malcolm Lowry. I think that the notion of form-finding captures something that is essential to being Narrative and that goes essentially beyond being Diachronic, and one view might be that form-finding is not only necessary for Narrativity, but also minimally sufficient.
Against that, it may be said that if one is genuinely Narrative one must also (and of course) have some sort of distinctive [S] story-telling tendency when it comes to one's apprehension of one's life--where story-telling is understood in such a way that it does not imply any tendency to fabrication, conscious or otherwise, although it does not exclude it either. On this view, one must be disposed to apprehend or think of oneself and one's life as fitting the form of some recognized narrative genre. Story-telling is a species of form-finding, and the basic model for it, perhaps, is the way in which gifted and impartial journalists or historians report a sequence of events. Obviously they select among the facts, but they do not, we suppose, distort or falsify them, and they do more than merely list them in the correct temporal order, for they also place them in a connected account. In its non-falsifying mode story-telling involves the ability to detect--not invent--developmental coherencies in the manifold of one's life. It is one way in which one may be able to apprehend the deep personal constancies that do in fact exist in the life of every human being--although I believe this can also be done by form-finding without story-telling. So story-telling entails form-finding, and story-telling in addition to form-finding is surely--trivially--sufficient for Narrativity. Sec. 8 A third and more troubling suggestion is that if one is Narrative one will also have a tendency to engage unconsciously in invention, fiction of some sort--falsification, confabulation, revisionism--when it comes to one's apprehension of one's own life. I will call this [R] revision. According to the revision thesis Narrativity always carries with it some sort of tendency to revision, where revision essentially involves more than merely changing one's view of the facts of one's life.
(One can change one's view of the facts of one's life without any falsification, simply by coming to see things more clearly.) Revision in the present sense is by definition non-conscious. It may sometimes begin consciously, with deliberate lies told to others, for example, and it may have semi-conscious instars, but it is not genuine revision in the present sense unless or until its products are felt to be true in a way that excludes awareness of falsification.32 The conscious/non-conscious border is both murky and porous, but I think the notion of revision is robust for all that. The paradigm cases are clear, and extremely common. n. 32 It's well known that fully conscious lies can forget their origins and come to be fully believed by their perpetrators. If the revision thesis were true, it would be bad news for the ethical Narrativity thesis, whose supporters cannot want ethical success to depend essentially on some sort of falsification. I have no doubt that almost all human Narrativity is compromised by revision, but I don't think it must be. It is in any case a vast and complex phenomenon, and I will make just a very few remarks. It is often said that autobiographical memory is an essentially constructive and reconstructive phenomenon (in the terms of experimental psychology) rather than a merely reproductive one, and there is a clear sense in which this is true.33 Memory deletes, abridges, edits, reorders, italicizes. But even if construction and reconstruction are universal in autobiographical memory, they needn't involve revision as currently defined, for they may be fabrication-free story-telling or form-finding. Many have proposed that we are all without exception incorrigible self-fabulists, 'unreliable narrators' of our own lives,34 and some who hold this view claim greater honesty of outlook for themselves, and see pride, self-blindness, and so on in those who deny it. But other research makes it pretty clear that this is not true. 
It's not true of everyone. We have here another deep dimension of human psychological difference. Some people are fabulists all the way down. In others, autobiographical memory is fundamentally non-distorting, whatever automatic processes of remoulding and recasting it may invariably involve.35 n. 33 For good discussions, see e.g. Brewer 1988, McCauley 1988. n. 34 Cf. e.g. Bruner 1987, 1990, 1994. The notion of an 'unreliable narrator' derives from literary criticism. In The Mind's Past (1998a) Gazzaniga seems to support a strongly reconstructive view of human memory, but he later says only that personal memory tends to be 'a bit fictional' (1998b, p. 713). n. 35 Brewer (1988) argues that the evidence that supports 'the reconstructive view of personal memory . . . does not seem very compelling'. See also Wagenaar 1994, Baddeley 1994, p. 239, Swann 1990. Ross (1989) argues that revision that seems to serve self-esteem may be motivated by nothing more than a concern for consistency. Some think that revision is always charged, as I will say--always motivated by an interconnected core group of moral emotions including pride, self-love, conceit, shame, regret, remorse, and guilt. Some go further, claiming with Nietzsche that we always revise in our own favour: '"I have done that", says my memory. "I cannot have done that", says my pride, and remains inexorable. Eventually--memory yields.'36 n. 36 1886, §69. It seems, however, that neither of these claims is true. The first, that all revision is charged, is significantly improved by the inclusion of things like modesty or low self-esteem, gratitude or forgiveness, in the core group of motivating moods and emotions; some people are just as likely to revise to their own detriment and to others' advantage as the other way round. But the claim that revision is always charged remains false even so.
Revision may occur simply because one is a natural form-finder but a very forgetful one and instinctively seeks to make a coherent story out of limited materials.37 Frustrated story-tellers may fall into revision simply because they can't find satisfying form in their lives and without being in any way motivated by a wish to preserve or restore self-respect. John Dean's recall of his conversations with Nixon at the Watergate hearings is another much discussed case of uncharged revision. When the missing tapes were found, his testimony was revealed to be impressively 'accurate about the individuals' basic positions' although it was 'inaccurate with respect to exactly what was said during a given conversation'. His recall of events involved revision in addition to routine forgetting and morally neutral reconstruction, in so far as it contained positive mistakes, but there is no reason to think that it was significantly charged.38 'Flashbulb' memories (such as the memory of what one was doing when one heard about the shooting of President Kennedy or about 9/11) can be surprisingly inaccurate n. 37 Perhaps 'confabulation' in patients with Korsakov's syndrome is an extreme and pathological example of revision. See e.g. Sacks 1985, Gazzaniga 1998. n. 38 Brewer 1988, p. 27. Cf. Neisser 1981. --astonishingly so given our certainty that we remember accurately--but once again there seems no reason to think that the revision that they involve must be charged.39 Even when revision is charged, the common view that we always revise in our own favour must yield to a mass of everyday evidence that some people are as likely to revise to their own detriment--or simply forget the good things they have done.40 When La Rochefoucauld says that self-love is subtler than the subtlest man in the world, there is truth in what he says. And revising to one's own detriment may be no more attractive than revising to one's advantage.
But La Rochefoucauld is sometimes too clever, or rather ignorant, in his cynicism.41 n. 39 Pillemer 1998, ch. 2. n. 40 For more formal evidence, cf. e.g. Wagenaar 1994, 'Is memory self-serving?'. n. 41 Even if we did all tend to see our lives in a favourable light, it would not follow that we were all revisers: some will have self-favouring, self-respect-preserving justifications of their actions already in place at the time of action, and so have no need for subsequent revision. Is a tendency to revise a necessary part of being Narrative? No. In our own frail case, substantial Narrativity may rarely occur without revision, but story-telling is sufficient for Narrativity, and one can be story-telling without being revisionary. So the ethical Narrativity thesis survives the threat posed by the revision thesis. When Bernard Malamud claims that 'all biography is ultimately fiction', simply on the grounds that 'there is no life that can be captured wholly, as it was', there is no implication that it must also be ultimately untrue.42 n. 42 Malamud 1979. Sec. 9 I've made some distinctions, but none of them cut very sharply, and if one asks how Diachronics, form-finders, story-tellers, and revisers relate to each other, the answer, as far as I can see, is that almost anything goes. Story-telling entails form-finding because it is simply one kind of form-finding, but I see no other necessary connections between the four. Some think that all normal human beings have all four of these properties. I think that some normal human beings have none of them. Some think that Narrativity necessarily involves all four. I think (as just remarked) that the limiting case of Narrativity involves nothing more than form-finding story-telling (it does not even require one to be Diachronic). How do the authors I've quoted classify under this scheme?
Well, Dennett is someone who endorses a full-blown [+D +F +S +R] view of what it is to be Narrative, and he seems to place considerable emphasis on revision: "our fundamental tactic of self-protection, self-control, and self-definition is not spinning webs or building dams, but telling stories, and more particularly concocting and controlling the story we tell others--and ourselves--about who we are."43 n. 43 1991, p. 418; my emphasis. Note that Dennett stresses the idea that this is a story about who we are, rather than about our lives. Bruner, I think, concurs with this emphasis. I take it that Sartre endorses [+F +S +R], and is not particularly concerned with [D] in so far as he is mainly interested in short-term, in-the-present story-telling. Schechtman's account of Narrativity is [+D +F +S ?R]. It assumes that we are all Diachronic, and requires that we be form-finding and story-telling, and explicitly so: constituting an identity requires that an individual conceive of his life as having the form and the logic of a story--more specifically, the story of a person's life--where "story" is understood as a conventional, linear narrative44 n. 44 Schechtman 1997, p. 96. This is a strong expression of her view, which has usefully weaker forms (cf. e.g. pp. 117, 159). but it is important, on her view, that there be no significant revision, that one's self-narrative be essentially accurate. I take myself to be [-D -F -S -R]. The claim that I don't revise much is the most vulnerable one, because it is in the nature of the case that one has no sense that one revises when one does. So I may be wrong, but (of course) I don't think so. On the strong form of Schechtman's view, I am not really a person. Some sentient creatures, she says, 'weave stories of their lives, and it is their doing so which makes them persons'; to have an 'identity' as a person is 'to have a narrative self-conception ...
to experience the events in one's life as interpreted through one's sense of one's own life story'. This is in fact a common type of claim, and Schechtman goes further, claiming at one point that 'elements of a person's narrative' that figure only in his 'implicit self-narrative', and that 'he cannot articulate ... are only partially his--attributable to him to a lesser degree than those aspects of the narrative he can articulate'.45 n. 45 1997, p. 117. This seems to me to express an ideal of control and self-awareness in human life that is mistaken and potentially pernicious. The aspiration to explicit Narrative self-articulation is natural for some--for some, perhaps, it may even be helpful--but in others it is highly unnatural and ruinous. My guess is that it almost always does more harm than good--that the Narrative tendency to look for story or narrative coherence in one's life is, in general, a gross hindrance to self-understanding: to a just, general, practically real sense, implicit or explicit, of one's nature. It's well known that telling and retelling one's past leads to changes, smoothings, enhancements, shifts away from the facts, and recent research has shown that this is not just a human psychological foible. It turns out to be an inevitable consequence of the mechanics of the neurophysiological process of laying down memories that every studied conscious recall of past events brings an alteration.46 The implication is plain: the more you recall, retell, narrate yourself, the further you risk moving away from accurate self-understanding, from the truth of your being. Some are constantly telling their daily experiences to others in a storying way and with great gusto. They are drifting ever further off the truth. Others never do this, and when they are obliged to convey facts about their lives they do it clumsily and uncomfortably and in a way that is somehow essentially narrative-resistant. n. 46 See McCrone 2003, Debiec, LeDoux, & Nader 2002.
Certainly Narrativity is not a necessary part of the 'examined life' (nor is Diachronicity), and it is in any case most unclear that the examined life, thought by Socrates to be essential to human existence, is always a good thing. People can develop and deepen in valuable ways without any sort of explicit, specifically Narrative reflection, just as musicians can improve by practice sessions without recalling those sessions. The business of living well is, for many, a completely non-Narrative project. Granted that certain sorts of self-understanding are necessary for a good human life, they need involve nothing more than form-finding, which can exist in the absence of Narrativity; and they may be osmotic, systemic, not staged in consciousness. Psychotherapy need not be a narrative or Narrative project. It regularly involves identifying connections between features of one's very early life and one's present perspective on things, but these particular explanatory linkings need not have any sort of distinctively narrative character to them. Nor need they be grasped in any distinctively Narrative way. Nor need they interconnect narratively with each other in any interesting way. I don't need to take up any sort of Narrative attitude to myself in order to profit from coming to understand how the way X and Y treated me when I was very young is expressed in certain anxieties I have now. The key explanatory linkings in psychotherapy are often piecemeal in nature, as are many of the key impacts of experience. Ideally, I think, one acquires an assorted basketful of understandings, not a narrative--an almost inevitably falsifying narrative.

Sec. 10

'I'm sorry, but you really have no idea of the force and reach of the psychological Narrativity thesis. You're as Narrative as anyone else, and your narratives about yourself determine how you think of yourself even though they are not conscious.' Well, here we have a stand-off.
I think it's just not so, and I take it that the disagreement is not just terminological. Self-understanding does not have to take a narrative form, even implicitly. I'm a product of my past, including my very early past, in many profoundly important respects. But it simply does not follow that self-understanding, or the best kind of self-understanding, must take a narrative form, or indeed a historical form. If I were charged to make my self-understanding explicit, I might well illustrate my view of myself by reference to things I (GS) have done, but it certainly would not follow that I had a Diachronic outlook, still less a Narrative one. At this point Heidegger informs us, in a variation on Socrates, that a human being's existence--'Dasein's' existence--is constituted by the fact that its being is an issue for it. Fine, but it's not at all clear that being a thing whose being is an issue for it need involve any sort of Narrative outlook. Heidegger takes it that one's 'self-understanding is constitutive of [one's] ... being what or who [one] is', and that this self-understanding consists largely in one's 'determining oneself as someone by pressing ahead into a possible way to be'.47 And here he seems (I do not understand his notion of temporality) to be insisting on the importance of being Diachronic and indeed Narrative. But if this is his claim then--once again--it seems to me false: false as a universal claim about human life, false as a claim about what it is for human beings to be what or who they are, false as a normative claim about what good or authentic human life must be like, false about what any self-understanding must involve, and false about what self-understanding is at its best. Perhaps Heideggerian authenticity is compatible with the seemingly rival ideal of living in the moment--'Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself.
Sufficient unto the day is the evil thereof'48--but this will not win me over. n. 47 Blattner 1999, pp. 32, 41; I substitute 'one' for 'Dasein'. Cf. Heidegger (1927, p. 344): 'In the light of the "for-the-sake-of-which" of one's self-chosen ability-to-be, resolute Dasein frees itself for its world.' n. 48 Matthew vi. 34. This way of being in the present has nothing to do with the 'aesthetic' way of being in the present described and condemned by Kierkegaard.

Sec. 11

There is much more to say. Some may still think that the Episodic life must be deprived in some way, but truly happy-go-lucky, see-what-comes-along lives are among the best there are, vivid, blessed, profound.49 Some think that an Episodic cannot really know true friendship, or even be loyal. They are refuted by Michel de Montaigne, a great Episodic, famous for his friendship with Étienne de la Boétie, who judged that he was 'better at friendship than at anything else' although "there is nobody less suited than I am to start talking about memory. I can find hardly a trace of it in myself; I doubt if there is any other memory in the world as grotesquely faulty as mine is!"50 n. 49 Note, though, how Tom Bombadil in The Lord of the Rings can produce a certain anxiety. n. 50 1563-92, p. 32. Montaigne finds that he is often misjudged and misunderstood, for when he admits he has a very poor memory people assume that he must suffer from ingratitude: 'they judge my affection by my memory', he comments, and are of course quite wrong to do so.51 A gift for friendship doesn't require any ability to recall past shared experiences in detail, nor any tendency to value them. It is shown in how one is in the present. n. 51 p. 33. 'A second advantage' of poor memory, he goes on to note, 'is that ... I remember less any insults received'. But can Episodics be properly moral beings? The question troubles many. Kathy Wilkes thinks not.52 So also, perhaps, do Plutarch and many others.
But Diachronicity is not a necessary condition of a properly moral existence, nor of a proper sense of responsibility.53 As for Narrativity, it is in the sphere of ethics more of an affliction or a bad habit than a prerequisite of a good life. It risks a strange commodification of life and time--of soul, understood in a strictly secular sense. It misses the point. 'We live', as the great short story writer V. S. Pritchett observes, 'beyond any tale that we happen to enact'.54 n. 52 Wilkes 1998. n. 53 I discuss Episodic ethics in Life in Time. n. 54 Pritchett 1979, p. 47. I am grateful to audiences in Oxford (1999), Rutgers (2000), and Reading (2003) for their comments.

References

Baddeley, A. (1994). 'The remembered self and the enacted self', in The Remembering Self: Construction and Accuracy in the Self-Narrative, edited by U. Neisser & R. Fivush (Cambridge: Cambridge University Press).
Blattner, W. (1999). Heidegger's Temporal Idealism (Cambridge: Cambridge University Press).
Blumenfeld, L. (2003). Revenge: A Story of Hope (New York: Washington Square Press).
Brewer, W. F. (1988). 'Memory for randomly sampled autobiographical events', in Remembering Reconsidered: Ecological and Traditional Approaches to the Study of Memory, edited by U. Neisser & E. Winograd (Cambridge: Cambridge University Press).
Bruner, J. (1987). 'Life as Narrative', Social Research 54, pp. 11-32.
--. (1990). Acts of Meaning (Cambridge, MA: Harvard University Press).
--. (1994). 'The "remembered" self', in The Remembering Self.
Campbell, J. (1994). Past, Space, and Self (Cambridge, MA: MIT Press).
Debiec, J., LeDoux, J. and Nader, K. (2002). 'Cellular and Systems Reconsolidation in the Hippocampus', Neuron 36(3), pp. 527-538.
Dennett, D. (1988). 'Why everyone is a novelist', Times Literary Supplement, 16-22 September.
Gazzaniga, M. (1998a). The Mind's Past (Berkeley: University of California Press).
--. (1998b). 'The Neural Platonist', Journal of Consciousness Studies 5, pp.
706-717, also at http://www.imprint.co.uk/gazza_iv.htm.
Heidegger, M. (1927/1962). Being and Time, translated by J. MacQuarrie & E. Robinson (Oxford: Blackwell).
Hirst, W. (1994). 'The remembered self in amnesics', in The Remembering Self.
Hubbard, E. (1909). Article in Philistine.
James, H. (1864-1915/1999). Henry James: A Life in Letters, edited by Philip Horne (London: Penguin).
McCauley, R. N. (1988). 'Walking in our own footsteps: Autobiographical memory and reconstruction', in Remembering Reconsidered.
McCrone, J. (2003). New Scientist, May 3.
MacIntyre, A. (1981). After Virtue (London: Duckworth).
Malamud, B. (1979). Dubin's Lives (New York: Farrar, Straus & Giroux).
Montaigne, M. de (1563-92/1991). The Complete Essays, translated by M. A. Screech (London: Penguin).
Neisser, U. (1981). 'John Dean's memory: A case study', Cognition 9, pp. 1-22.
Pillemer, D. (1998). Momentous Events, Vivid Memories: How Unforgettable Moments Help Us Understand the Meaning of Our Lives (Cambridge, MA: Harvard University Press).
Plutarch (c. 100 AD/1939). 'On Tranquillity of Mind', in Plutarch, Moralia VI, translated by W. C. Helmbold (Cambridge, MA: Harvard University Press).
Ross, M. (1989). 'Relation of implicit theories to the construction of personal histories', Psychological Review 96, pp. 341-357.
Sacks, O. (1985). The Man Who Mistook His Wife for a Hat (London: Duckworth).
Sartre, J.-P. (1938/1996). La nausée (Paris: Gallimard).
Schechtman, M. (1997). The Constitution of Selves (Ithaca: Cornell University Press).
Scoville, W. B. and Milner, B. (1957). 'Loss of recent memory after bilateral hippocampal lesions', Journal of Neurology, Neurosurgery, and Psychiatry 20, pp. 11-21.
Shaftesbury, Earl of (1698-1712/1900). 'Philosophical Regimen', in The Life, Unpublished Letters, and Philosophical Regimen of Anthony, Earl of Shaftesbury, edited by B. Rand (New York: Macmillan).
Strawson, G. (1997). '"The Self"', in Models of the Self, edited by S. Gallagher & J.
Shear (Thorverton: Imprint Academic), pp. 1-24, also at http://www.imprint.co.uk/strawson.htm.
--. (1999). 'The Self and the SESMET', in Models of the Self, pp. 483-518, also at http://www.imprint.co.uk/pdf/sesmet.pdf.
--. (in preparation). Life in Time (Oxford: Oxford University Press).
Swann, W. B. (1990). 'To be adored or to be known: The interplay of self-enhancement and self-verification', in Handbook of Motivation and Cognition: Foundations of Social Behavior, edited by R. M. Sorrentino & E. T. Higgins, volume 2 (New York: Guilford).
Taylor, C. (1989). Sources of the Self (Cambridge: Cambridge University Press).
Wagenaar, W. (1994). 'Is memory self-serving?', in The Remembering Self.
Wilkes, K. (1998). 'GNOTHI SEAUTON (Know Thyself)', Journal of Consciousness Studies 5, pp. 153-65; reprinted in Models of the Self.

From checker at panix.com Mon Jul 18 00:07:23 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 17 Jul 2005 20:07:23 -0400 (EDT)
Subject: [Paleopsych] Alexander Brown: If We Value Individual Responsibility, Which Policies Should We Favour?
Message-ID:

Alexander Brown: If We Value Individual Responsibility, Which Policies Should We Favour? Journal of Applied Philosophy, Vol. 22, No. 1, 2005

[This article, which reassesses the role of individual responsibility, could not have been published a few years ago, as it takes away from the mandate of the state to redistribute.]

Abstract

Individual responsibility is now very much on the political agenda. Even those who believe that its importance has been exaggerated by the political right--either because the appropriate conditions for assigning responsibility to individuals are rarely satisfied or because not enough is done to protect individuals from the more harmful consequences of their past choices and gambles--accept that individual responsibility is at least one of the values against which a society and its institutions ought to be evaluated.
One might be forgiven for assuming, then, that we know exactly why individual responsibility is important. The truth is otherwise. Surprisingly little philosophical work has been undertaken to analyse and separate out the different rationales that might be in play. Several possible reasons are examined here, including utility, the social bases of self-respect, autonomy, human flourishing and fairness. However, once we adopt a pluralistic view of the value of individual responsibility we open up the possibility of value conflict, a conflict that can make it harder to arrive at definitive prescriptions about which social policies best advance our concerns for individual responsibility. It is nevertheless possible to draw at least some conclusions about which policies we should favour. One important conclusion is that sometimes it is better not to hold individuals responsible for their past choices by denying them aid now, so that they might be better able to assume individual responsibility at a later date.

-------------------

The idea that each person bears a special responsibility for the success or failure of his or her own life has long been a preoccupation of the political right. It is argued that individuals should save for the future, rely on their own hard work to satisfy their needs and adjust their personal ends to the shares of resources they can reasonably expect to receive over the course of their lives. A relatively recent development in normative egalitarian theory, however, has meant that these ideas are no longer the preserve of the political right. An important theme that has developed through the work of egalitarians John Rawls and Ronald Dworkin is that whilst governments ought to show equal concern for the lives of all citizens, not all inequalities are unfair.
Rawls and Dworkin argue that it is right to expect citizens to assume personal responsibility for their own desires and preferences and so inequalities of happiness and preference satisfaction do not raise a case for compensation [1]. A further implication is that since it would be unfair to force hardworking taxpayers to support people who choose to be idle, individuals should be held responsible for the trade-off they make between work and leisure, which means that resulting inequalities of income should be borne by the agents themselves [2]. Fairness, then, offers one reason for adopting responsibility-sensitive welfare policies. But I think it repays further investigation to consider more closely why else individual responsibility is important. Whilst it is fairly obvious that other reasons do exist, it is perhaps less well understood that these different reasons (as well as different conceptions of fairness) do not entail identical sets of responsibility-sensitive policies. One problem is that insisting on individual responsibility for reasons of fairness may jeopardise people's future responsibility. So it can be difficult to arrive at definitive prescriptions about which welfare policies best advance our concerns. In this paper, however, I come down on the side of those responsibility-catering policies which protect and promote people's responsibility now and in the future. In the first section I explore some of the history behind current philosophical interest in the ideal of individual responsibility, including Elizabeth Anderson's recent polemic against normative egalitarian theory post-Dworkin. Following on from this, section II introduces five rationales for promoting greater individual responsibility.
These are: utility (individual responsibility tends to promote happiness and desire satisfaction), self-respect (encouraging individuals to take greater responsibility for their own lives and livelihoods can enhance their self-respect), autonomy (expecting people to take individual responsibility for the success or failure of their own lives is an important way of showing respect for their competence as freethinking agents), human flourishing (individual responsibility is an essential part of what it means to lead a good life), and fairness (assigning responsibility to individuals for the situations in which they find themselves can in some cases be the fairest way of resolving a conflict of interests between taxpayers and welfare claimants). Focusing on the examples of drug addicts, negligent drivers injured in road traffic accidents, and people who prefer not to work, section III examines potential sources of conflict amongst these reasons. In these cases holding individuals responsible for their past mistakes can make them far less able to assume responsibility later on. Finally, section IV considers the wider implications of this value conflict and examines whether it is nevertheless possible to elicit from the ideal of individual responsibility (properly understood) something approaching a coherent and attractive social welfare strategy. I defend two main conclusions. The first is that if we deny the lexical priority of fairness, we should protect people's future responsibility in cases of value conflict. This means that the State ought to intervene to help the victims of accidents who, as a result of their own imprudence, have temporarily lost their capacities for responsibility. The second conclusion is that ideally some taxpayers' money should be spent on giving people who are unemployed the practical support and positive encouragement they need to assume greater responsibility for their own livelihoods.
In this section I also try to respond to the libertarian challenge that these policies violate taxpayers' rights and even the rights of people whom we want to assume personal responsibility.

I. Background

Current philosophical interest in individual responsibility owes much to the work of Ronald Dworkin. In his two seminal articles on equality (first published in Philosophy and Public Affairs in 1981) Dworkin appeared to uncover an important truth about equality, namely, that not all inequalities are unfair--some inequalities are the responsibility of the individual. Dworkin was certainly not the first contemporary philosopher to argue against flat equality. Nozick, for example, had previously developed a powerful objection in his stirring book, Anarchy, State and Utopia. For Nozick, it is a fallacy to think that the State has at its disposal at any given time a pool of unattached resources to redistribute as it sees fit. "There is no central distribution, no person or group entitled to control all resources, jointly deciding how they are to be doled out" [3]. So the State does not have the right to redistribute resources in the direction of equality. Similarly, F. A. Hayek argued that the essence of liberty is when multitudes of individuals act on the basis of their own knowledge and in pursuit of their own ends under the rule of law; this liberty is undermined when State authorities intervene to promote a particular result (equality, say) [4]. But whereas Nozick and Hayek appealed to fundamental rights and individual liberty to argue against flat equality, Dworkin demonstrated how subtler conclusions could be reached simply by focusing on the ideas of fairness and individual responsibility. For Dworkin, a society of equals is one in which the distribution of resources at any given time is sensitive to people's voluntary choices and "option luck" but insensitive to their "brute luck" [5].
This entails upholding those inequalities of income and wealth that flow from choices about whether to invest rather than consume, or to consume less expensively rather than more, or to work in more rather than less profitable ways, but intervening (even in market outcomes) to mitigate inequalities that reflect brute luck such as shortfalls in people's native endowments [6]. In this way fairness justifies in some cases, but limits in others, the redistribution of people's income and wealth. The fact that Dworkin incorporated responsibility-sensitivity into the debate on distributive justice may not be all that surprising given the distributive bias that has gripped academic work on justice over the past three decades [7]. But it should not be forgotten that individual responsibility has been prized by another tradition for at least two centuries. Some historical perspective is needed to understand this tradition. The Poor Laws of England were an early attempt to improve the quality of life and living conditions of the poor. But despite the laudable aims of the Laws, critics argued that the overall effect was to remove a key incentive to hard work and self-reliance. Whilst the Laws may have made some of the poor better off materially, they made them worse off morally. Perhaps the clearest example of this line of criticism can be found in Malthus' Essay on the Principle of Population. The labouring poor, to use a vulgar expression, seem always to live from hand to mouth. Their present wants employ their whole attention, and they seldom think of the future. Even when they have an opportunity of saving, they seldom exercise it; but all that they earn beyond their present necessities goes, generally speaking, to the alehouse. The poor law may, therefore, be said to diminish both the power and the will to save, among the common people, and thus to weaken one of the strongest incentives to sobriety and industry, and consequently to happiness [8].
It is perhaps a gloomy picture of human nature that says when work is not necessary to live, people tend not to work; that when having more children than one can afford to look after triggers support from others, people tend not to exercise birth control; and that when there is charity from government, there is no incentive to save for the future and no reason not to spend all day in the alehouse. But it is a picture that has endured in the work of the New Right. In one of the defining studies of welfare policy in the twentieth century, Losing Ground: American Social Policy 1950-1980, Charles Murray argued that although American social welfare policy has aimed at helping the poor--such as AFDC (Aid to Families with Dependent Children)--it has only succeeded in eroding traditional moral distinctions and deepening their dependency. In the 1950s, the reason for "getting people off welfare" was to keep them from being a drag on the good people--meaning the self-sufficient people--and to rescue them from a degrading status. It was not necessary to explain why it was better to be self-sufficient; it was a precondition for being a member of society in good standing. In the late 1960s, with the attack on middle-class norms and the rise of the welfare rights movement, this was no longer good enough. Self-sufficiency was no longer taken to be an intrinsic obligation of healthy adults [9]. The arguments of the New Right greatly influenced the politics of Margaret Thatcher and Ronald Reagan and it is perhaps a reflection of the electoral success of conservative politics during this period that nowadays leaders of all political parties, not just those on the conservative right, use the language of individual responsibility to place themselves in a tradition of thought that extols the virtues of hard work, delayed gratification, thrift, and self-reliance. In the words of Tony Blair: Our vision is of . . .
[a] society where more opportunities, and more choices, are matched by a greater responsibility on the part of individuals to help themselves [10]. Work is just one area of social welfare planning where the ethics of individual responsibility has played a substantial role. The distribution and funding of medical care is another example. It has been argued that people should assume responsibility for their own health and so cannot expect the rest of society to meet those medical needs that result from their taking unnecessary risks with their health. The New Right's criticism of the National Health Service in the United Kingdom, and of Medicaid and Medicare in the United States, is that these schemes fail to recognise individual responsibility and in some cases undermine it. The criticism is that when the State assumes responsibility for people's health care it removes the important link between conduct and consequences and so effectively removes personal responsibility for ill health. The upshot is that people lack the incentive to make prudent self-regarding choices about their own health and may over time lose the capacity to do so [11].
It is difficult to analyse this rich tradition of thought about the importance of individual responsibility and boil it down to just a few policy prescriptions, but among the proposals that have been put forward are: ability testing for those seeking unemployment benefits to identify those who are genuinely unable to work as opposed to just malingering; work opportunities for the poor rather than handouts; social provision of basic health care services but not for specialist treatment programmes for illnesses that are self-inflicted; the transference of more and more medical costs, even for basic health care services, away from social insurance schemes to the patients themselves; the setting up of independent hospitals that manage their own budgets and have the right not to treat those who are uninsured and cannot afford to pay for treatment. Each of these policies upholds the following basic principle: people ought to assume responsibility for their own welfare as individuals and people who fail to do so are not entitled to the same level of assistance as people who are unable to do so or who try to do so and then fail. My aim in the next section of the paper is to investigate the appeal of this principle and to explore more fully why individual responsibility matters. My point so far is not that fairness provides an insufficient justification for responsibility-sensitivity but that fairness is not the only value in play. The reason I say this is that understanding the attraction of individual responsibility may require answers to a range of questions. What do we owe to each other? How should we live? What kind of society do we want to live in? And answering these questions may in turn require an appeal to various responsibility-supporting values. There is a further complication. Given the appeal of individual responsibility, it might seem obvious that holding people responsible for the success or failure of their own lives is morally desirable in all cases.
Not so. In recent years there have been some important philosophical objections to responsibility-sensitivity in social welfare policy. Perhaps the most striking objections appear in Elizabeth Anderson's article, 'What is the Point of Equality?'. In this article Anderson argues (amongst other things) that strict adherence to the idea of choice and responsibility entails treating individuals in ways we have other egalitarian reasons not to want to treat them: that hospitals may with fairness deny emergency medical treatment to uninsured drivers injured as a result of their own negligent actions; that it is acceptable to exclude certain sections of the disabled community from public places if they are responsible for their own disabilities; that government agencies should withhold disaster relief from farmers who knowingly set up production in hazardous geographical areas; that society has a right not to compensate police officers, fire fighters, and so on, for any injuries they might suffer as a result of carrying out their dangerous duties; that we should accept the present system of market-based rewards as the unintended result of people's free choices, even though the present system fails to recognise the work undertaken by female carers in the home, leaving such women vulnerable to exploitation, violence, and domination at the hands of men [12]. For Anderson, the more fundamental egalitarian task is to protect people from oppressive social relationships and enable them to live as human beings and equal citizens [13]. Whilst Anderson's examples raise some important questions about responsibility-sensitivity, luck egalitarians can make the following fairly obvious reply to Anderson. Even if some individuals are responsible for the situations in which they find themselves, it does not follow from this that distributive arrangements ought to be based solely on the principle that voluntary disadvantages deserve no compensation.
The mere fact that some people face social oppression or lack access to the capabilities necessary to function as human beings and equal citizens may provide sufficient reason not to enforce consequential responsibility. Nonetheless, in so far as insisting on individual responsibility can have undesirable implications from an egalitarian or any other point of view, I think it behoves us to be absolutely clear about why individual responsibility matters. Only then can we judge whether or not these implications are out of proportion with our aims in pursuing policies that are responsibility-sensitive. Putting the same point another way, the claim that personal responsibility is not the only thing that matters and should be curtailed in respect of other, more traditional egalitarian concerns (social exclusion, poverty, deprivation, oppression, and so on) is plausible, but to make good this claim we must first consider the prior question of why individual responsibility is important and what precise implications it has for public policy.

II. Why is Individual Responsibility Important?

Support for policies which promote individual responsibility can take many forms. One suggestion is that we care about individual responsibility to the extent that it has beneficial consequences for both individuals and society as a whole. But this suggestion immediately raises two further questions. Firstly, what consequences do we care about? And secondly, what should society do in respect of these consequences? Utilitarianism provides some well-known answers to both questions.

Utility

In relation to the first question, classical utilitarianism will emphasise pleasure and the absence of pain. In answer to the second question, it will favour the greatest amount of pleasure all told.
To see how this theory is relevant to debates surrounding individual responsibility and social welfare, consider the following question: should society, through the tax system, redistribute money from the rich to the poor? At first glance it seems that utilitarianism supports a policy of redistribution regardless of people's responsibility for being rich or poor. This is because money has diminishing marginal utility: the more money people have, the less pleasure they derive from additional amounts [14]. However, a more sophisticated utilitarian analysis of this question will also take account of the utility-enhancing consequences of responsibility-sensitivity. What are these consequences? For one thing, taxing the financial rewards given to hard-working high-flyers--and in that sense ignoring any responsibility they might have for their earnings--may reduce the satisfaction they derive from their work. It may also reduce the incentive to work hard and thereby lower efficiency, innovation and economic growth, which may in turn diminish overall utility. Requiring the poor to work for their income may also increase utility. This might happen in a number of ways such as: through the greater prosperity of those who take up work, indirectly from an increase in the labour supply and reductions in social exclusion and crime and, more directly, as a result of the satisfaction people derive from earning their own keep [15]. These are just some of the ways that responsibility-sensitivity may enhance utility. So in the end utilitarianism may support the policy of holding people responsible for earning their own income and, therefore, of not taxing the rich for the sake of the poor.

The Social Bases of Self-Respect

Another noteworthy consequence is self-respect. This argument for individual responsibility has to do with an alleged connection between individuals' taking control of their own lives and livelihoods and an increase in their self-respect.
In order to understand this argument it is necessary first to know something about self-respect. Self-respect is a notoriously difficult concept to pin down, but one thing we can say with at least some degree of certainty is that it is not something we can simply give to people like other ordinary material resources. Self-respect is a feeling, sense, or impression one has of oneself. It has a psychological standing. It also has a normative element. To see this, consider Stephen Darwall's notion of "evaluative self-respect" [16]. According to Darwall, this kind of self-respect is predicated on our appraisal of ourselves as people. To lack self-respect, in this sense, is to measure ourselves against the sort of person we desire to be but then to form the impression that we come up short of those standards. Although the object of evaluative self-respect is our own character, conduct, power and so on, other people also help to shape the perception we have of ourselves. Not only do our interactions with others partly define the standards of merit by which we come to measure ourselves, but they also offer a constant test of whether or not we have lived up to those standards. This last point helps us to see how self-respect can be linked to individual responsibility. One of the ways in which people often develop self-respect is through an awareness of their ability to take responsibility for their own lives. To succeed in securing one's own long-term health and safety; to have the self-mastery to develop and successfully pursue a realistic set of goals and ambitions; to increase one's skills and work for a living--these are all things that can enhance one's evaluation of oneself as a person with merit. The basic point is that different facets of individual responsibility can be foci for seeing ourselves in a favourable light. But how seriously should we take self-respect as a reason to promote personal responsibility?
The idea that society should, through the social and economic instrumentality of the State, promote self-respect is a conspicuous feature of current thinking about social justice. This is due in large measure to the influence of John Rawls. Even before the Commission on Social Justice in the United Kingdom concluded that people have a right of self-respect [17], Rawls claimed that the social bases of self-respect are perhaps the most important primary goods over which society has responsibility [18]. Rawls placed particular emphasis on the link between work and self-respect [19]. If jobs are not readily available in the private sector, then perhaps jobs should be created in local government--so great is the importance of self-respect [20]. There is, of course, no suggestion that encouraging greater individual responsibility through work is the only way to promote self-respect. Other writers, for example, have emphasised the role of co-operative ventures in promulgating self-respect, where groups of individuals take mutual responsibility for their welfare [21]. But my central business in this section is merely to try to motivate the claim that assisting people to work for a living can be a way of enhancing self-respect, which is one of a number of reasons to favour individual-responsibility-promoting policies. So there are, it seems, potentially a number of different benefits associated with individual responsibility. Even so, would we still support responsibility-catering policies if they failed to have these consequences--if, for example, individual responsibility did not enhance self-respect? The short answer is yes. Perhaps a life of individual responsibility has intrinsic value, and this is what I want to explore now.

Autonomy

One reason for thinking that individual responsibility has intrinsic value is that it characteristically involves the exercise of autonomy.
To see how individual responsibility might involve the exercise of autonomy, consider Dworkin's example of Louis, who develops expensive tastes for plovers' eggs and pre-phylloxera clarets as a result of reading magazine articles about lifestyles of the rich and famous. Should he receive additional public funds to help him pay for his new tastes? One reason for thinking that public funds should not be made available is that people cannot fairly expect the rest of society to foot the bill for the preferences they have freely developed [22]. Rawls, for example, believes that under a fair division of social responsibility, citizens would be expected to adjust their personal ends to reflect their reasonable expectations of income and wealth. Lack of preference satisfaction as such would not raise a case for further redistribution [23]. This line of reasoning may or may not be sound--Cohen and Anderson have suggested that the proposed assignment of responsibility under "equality of resources" is unsound on the grounds that unchosen preferences do raise a legitimate case for compensation [24]--but the point I wish to emphasise here is that fairness is not the only justification for assigning responsibility to individuals in this way. Another argument has to do with the value we place on the exercise of autonomy. Precisely this argument can be found in Bruce Landesman's article, "Egalitarianism". In this article Landesman considers Rawls' suggestion that in a just society it is the responsibility of the individual to adjust his or her personal ends over time. Landesman claims that not only is this outcome just, but also "morally sound". He points out that part of this process of adjustment is taking control of personal ends and altering them to fit changing circumstances. And, according to Landesman, "it is a part of a person's good to do this, an aspect of his autonomy and self-determination" [25].
Thus, if learning to revise our personal ends to better suit our situations (or else delay gratification) is bound up with the exercise of our autonomy, which is, in turn, a good thing, then allowing individuals to rely on taxpayers to satisfy their expensive tastes would be, as Landesman puts it, "morally inappropriate" [26]. The argument from autonomy is not exhausted by the example of well-being. In the case of income and employment, conservatives insist that dependency on the State is as bad for dependants as it is for taxpayers. It is argued that the lot of the poor should be left to the reflections of those individuals who find themselves in poverty, and not decided by welfare officials. This raises a question about the legitimacy of the modern welfare state, which (to varying degrees) allows people to rely on others to look after them. It has been suggested that when people are dependent on welfare handouts, it is the State and not they who exercise autonomy over what happens to them [27]. In fact, paternalism has been a major source of criticism against social welfare policy dating as far back as the Poor Laws of England [28]. It can seem important, then, for the State to place responsibility with citizens for what happens to them because such treatment shows respect for autonomy. This respect has to do with adopting policies that treat people as though they are competent to be left in charge of their own lives. No doubt this involves a variety of different things, but responsibility-sensitivity is certainly one way of showing respect for autonomy.

Human Flourishing

A different reason for valuing individual responsibility takes as its starting point the following question: what is a good life? What is particularly disturbing to some people about public dependency is not so much that it is unfair to those on whom individuals are dependent, but what it says about, and does to, the character of the person who is dependent.
This argument is less concerned with the assessment of actions in terms of whether or not unemployment is a voluntary choice, and more with the character of the agent who allows himself to be dependent on others. The Victorians, for example, condemned the Poor Laws for fostering a range of vices including intemperance and overpopulation, but at the heart of the criticism was a concern for a breakdown in the virtue of self-reliance. One develops the virtue of self-reliance by acting as a self-reliant person does, and the Victorians believed that the Poor Laws prevented individuals from acting in this way. Why is it good for a person to be self-reliant? There are many instances in ordinary moral discourse where "good" is identified with what comes naturally, and this appears to be one of them. According to this interpretation, self-reliance is a natural disposition that can be subverted by human intervention such as social welfare provision. For example, in 1979 the Conservative Party in the United Kingdom argued for the reform of the Welfare State on the basis that humans naturally tend towards self-reliance. The following passage is taken from its election manifesto:

We want to work with the grain of human nature, helping people to help themselves--and others. This is the way to restore that self-reliance and self-confidence which are the basis of personal responsibility and national success [29].

The suggestion, then, is that human beings characteristically and essentially rely on their own powers to meet their needs--they forage, hunt, produce, or else trade their talents in order to obtain the resources they need to survive; they decide how, and when, to do these things and what degree of effort will be required and for how long. So to be dependent on others even though one is capable of looking after oneself is to confound one's own nature. Dependency in this sense is an alienating or dehumanising experience, something antithetical to human flourishing.
We have seen that individual responsibility can be desirable both for its beneficial consequences and for the intrinsic value such a life may contain. But what other reasons might we have for caring about individual responsibility? As I have already mentioned, some egalitarians stress a connection between individual responsibility and fairness.

Fairness

To see this connection, consider Dworkin's example of the tennis player and the market-gardener (a re-telling of Aesop's classic fable of the Grasshopper and the Ants). Suppose there are two equally talented people who share the same social background and have an opportunity to privately own and use equally valuable sets of ordinary material resources. One wants to be a market gardener and selects land and raw materials that will allow him to produce as much of what others want as possible. The other person wants a similar amount of land and raw materials, but for use as a tennis court. Assuming the market gardener is more successful at trading with other members of the community, he will soon have a greater share of resources than the tennis player. What should we do? Should we allow these inequalities to develop or should we force the market gardener to hand over some of his resources to the tennis player in order to maintain the status quo? Dworkin's view is that people such as the tennis player should be held individually responsible for the lives they have chosen to lead. Equality requires that those who choose more expensive ways to live--which includes choosing less productive occupations measured by what others want--have less residual income in consequence [30]. Some egalitarians, then, uphold individual responsibility on grounds of fairness. However, I should make it clear at this stage that there is as yet no settled view about the exact link between fairness and individual responsibility.
Even within the egalitarian literature there are different ways of defining fairness, and so the link between fairness and individual responsibility needs to be qualified in two ways. The first point is that there can be more than one rationale of fairness for enforcing individual responsibility. Whereas some egalitarians (such as Cohen) believe that the fundamental egalitarian impulse is both to eliminate the influence of brute luck on distribution and to hold individuals responsible for their voluntary choices, other writers (such as Rawls) light upon the idea of a reasonable division of responsibility, which division takes account of the decisions of moral agents placed under conditions of equality. The first rationale says that those who are lazy (say) should be held responsible for their lack of income in so far as this is a voluntary choice. The second rationale says that the lazy should be held responsible for their lack of income in so far as this constitutes a reasonable division of responsibility. Partly as a result of this, a second qualification is that egalitarians also disagree about when it is right to enforce consequential responsibility on individuals. Consider again the issue of unemployment. According to Rawls, a reasonable division of responsibility is one in which people who prefer to surf all day rather than work for a living are entitled to fewer resources, which share reflects the extra leisure time they have at their disposal [31]. Dworkin also accepts the work requirement. He argues that within a hypothetical insurance market that caters to the risk of being insufficiently talented to earn at different levels of income, the most popular insurance policies would be those that require policyholders to prove (as a condition of receiving any payments) that the reason why their income falls below the insured level is that they have insufficient talent and not simply that they choose to be under-employed [32].
Much the same insistence on work can be found in Anderson's article. Whilst she claims that everybody is entitled to access to the capabilities necessary to function as human beings and equal citizens, Anderson also insists that in the vast majority of cases individuals do not have a right to unearned income, but should gain access to income via some type of paid employment [33]. Nevertheless, Anderson draws a further distinction between idle surfers (say) and non-wage-earning carers, who are entitled to recognition for their useful contribution to society [34]. In a similar vein Richard Arneson has argued that principles of egalitarian justice should be sensitive to deservingness [35]. On this reading of fairness, we should treat differently the person who prefers to surf all day rather than work for a living and the daughter who selflessly devotes herself to the care of an infirm sibling or elderly parent. Some egalitarians, however, argue that every citizen has a right to unearned income whatever his or her daily activities. Andrew Levine, for example, insists that governments have a responsibility to support even surfers--as a requirement of liberal neutrality [36]. Philippe Van Parijs defends a similar right to unconditional basic income on the grounds that we value real freedom for all [37]. And Timothy Hinton supports a universal right to basic income on the ground that everybody has a claim to an equal share of the world's resources, even those who do not work [38]. There are, then, different ways of interpreting fairness. Notwithstanding the above complications, in this section I have introduced five main reasons for thinking that individual responsibility is important: utility, the social bases of self-respect, autonomy, human flourishing and fairness. I do not claim this list to be exhaustive. No doubt other significant reasons have been left out.
But I hope I have at least been able to motivate the claim that fairness is not the sole justification for pursuing responsibility-sensitive social welfare policies. To illustrate this pluralism of values, consider the issue of employment. One reason for preferring individuals to earn their own means of subsistence is that this may maximise overall utility. A second reason is that it may enhance people's self-respect. A third reason is that by expecting individuals to be self-reliant we show respect for their autonomy. A fourth reason is that earning one's own keep is part of a flourishing human life. A final reason is that forcing taxpayers to support the voluntarily unemployed is, under certain conceptions of fairness, exploitative or unfair. How are these reasons distinct? Whilst fairness condemns the able-bodied freeloader who lives off the hard-earned income of others, utility and self-respect do not condemn the welfare scrounger as such. According to these values, there is nothing intrinsically wrong with a life of idle dependency--only when it detracts from utility and self-respect is this lack of individual responsibility bad. The differences do not end here. Human flourishing will support policies designed to help and encourage individuals back into the work-place, even if this is more expensive than simply withdrawing benefits. Fairness, on the other hand, is limited to the claim that the idle have no right to financial support. Finally, respect for autonomy implies that although it would be a good thing for individuals to take control of their lives by working for a living, the State ought to respect people's voluntary choices. So if an adult wants to lead a life of idle leisure and fully appreciates what he or she might lose by not having a job, respect for autonomy implies two things: firstly, that he or she should have less residual income as a consequence; and secondly, that no undue influence should be heaped on his or her decision not to work.

III. Value Conflict

Perhaps if policy makers were in a position to take on board and respond to these different reasons separately, each reason would present no particular difficulty. Taken together, however, it might prove difficult to construct a coherent approach. Once we introduce different values we also introduce the possibility of value conflict. It is conceivable, for example, that insisting on fairness and respect for autonomy will place limits on the extent to which the State can advance the goals of human flourishing and self-respect. Consider two examples.

Fairness, Autonomy and Human Flourishing

Imagine that someone has fallen into a lifestyle of heavy drug use, among the effects of which are that he can no longer afford to satisfy his cravings or control them. Let us suppose that he has caused the situation in which he now finds himself. He may not have chosen to ruin his life, but let us assume for the sake of this argument that his negligence in becoming hooked on this way of life alone warrants this judgement. What should we do? Fairness would appear to suggest that since he is responsible for the mess in which he finds himself, it would be unfair to force taxpayers to pay for his mistakes. But suppose we believe that taking control of one's own personal ends is an essential part of what it means to lead a good life. Taking this into account, we may decide instead not to abandon the addict to his stupor; we may decide that he should have access to specialist care and therapeutic treatment for his addiction. The basic thought is that if we wish to see him recover his capacity for responsible agency, in the sense of conquering his addiction and taking responsibility for his desires and desire satisfaction, at least some taxpayers' money should be spent on the relevant assistance to get him back on track. It looks as though there is a conflict of values. On the one hand, we may want him to recover his self-control.
But on the other hand, we may also want to uphold the demands of fairness, and this implies that we should not force taxpayers to pay for his treatment. There are plenty of more familiar examples of this sort. Think of Anderson's case of the uninsured, negligent driver lying injured at the side of the road [39]. Do we seriously believe the ambulance should leave the man where he is once it has been ascertained that he was responsible for the accident and has no insurance? Surely not. Imposing consequential responsibility on him now may result in his being unable to act more responsibly in the future. And yet we do not want to ignore fairness. This is a responsibility-responsibility trade-off. If we assist everybody, we protect people's future responsibility and the values of autonomy and human flourishing embodied therein, but we also unfairly ignore the fact that some people are responsible for the situations in which they find themselves. But if we do not offer assistance, we uphold fairness potentially at the cost of not protecting his future responsibility. Now some may suggest the following solution to these difficulties. The State could make it a legal requirement that people who run certain types of risk buy a minimum amount of insurance from approved private insurers. (Obviously in the case of illegal activities such as drug taking the State would first need to legalise the activity and find a way to register users.) The insurers then pay out if people find themselves in the position of having to give up work and go into drug rehab or undergo treatment for injuries suffered as a result of car accidents for which they were responsible. Whilst this may limit people's autonomy in one sense, it ensures that people have access to the care they may need at a later date, which in turn protects their future autonomy.
In reply to the potential criticism that this is paternalistic, it may be pointed out that people often accept rules that require them to do things they know they should do precisely because they realise--in their more reflective moments--that left to their own devices they may fail to do these things through carelessness or weakness of will. There is nothing disrespectful about this paternalism, because it appeals to people's own self-awareness. On the surface, then, there does not appear to be any great difficulty in accommodating fairness as well as concerns about people's future responsibility. Or is there? One problem with compulsory insurance is that it may be difficult to collect premiums from individuals on a fair basis. Ideally, people ought to buy insurance for each of the risks they run and pay different amounts for their insurance depending on the degree of risk they expose themselves to. Nevertheless, in practice some people may have insufficient money to buy insurance at the premiums fixed by the insurance companies, and this may be due to their own choices about work, leisure and consumption rather than any unfairness in the distribution of income opportunities. This means that people who are unable to buy insurance but who nevertheless are minded to run the risk and flout insurance rules will be left uninsured, subject to legal sanction and without access to treatment if misfortune strikes. In order to accommodate this problem, insurers could allow such people to buy the required insurance coverage at discount prices and raise the premiums of people who buy higher coverage to cover any slack in the premiums paid by the poor. In this event, however, insurance rules become a form of redistribution of insurance costs from the prudent to the imprudent. This result may protect people's future responsibility but it ignores fairness. Perhaps then the State should take over the provision of insurance across a range of dangerous activities and lifestyles.
How might this make things fairer? The answer is by requiring people to mutually insure themselves against different sorts of risks. For example, forcing non-drug-takers to contribute to an insurance scheme for drug rehab may be fair to the extent that it represents a quid pro quo for forcing drug-takers to contribute to an insurance scheme for risks they do not take. But the obvious difficulty with this suggestion arises if it is the same people taking all the risks, thereby placing an unfair share of the burden on the generally more prudent. Clearly the foregoing points do not constitute a knock-down objection to rules requiring people to buy insurance. Perhaps in a society where there is an ethos of personal responsibility everyone will put aside a fair amount of money for insurance or else will refrain from engaging in activities for which they cannot afford to buy insurance. Alternatively, taking people as they are, perhaps we should not insist that individuals pay equal amounts for their insurance after all. It may be that the capacities required for making prudent choices about risk and insurance (and for avoiding bad outcomes whilst engaged in risky activities) are a function of genetic endowment and upbringing rather than personal responsibility--in which case, a degree of redistribution in insurance costs is not an unfair result [40]. I readily accept both of these arguments. Instead, my point is that if some people do fail to set aside enough money to buy their own insurance but insist on taking the risks anyway, and if this tendency is their own responsibility, then rules requiring people to buy insurance cannot solve the problem of how to protect people's future responsibility whilst at the same time upholding fairness. What about asking people to pay society back once they have recovered? This may be possible in some cases, but we must also bear in mind the possibility that some people might never be able to repay the debt.
Some addicts, for example, might be able to take responsibility for their ends in the future and lead a normal life but fail to earn enough to repay the cost of their treatment. Others might be so damaged by their past experiences that they are unable to work to support themselves financially. To take another example, an uninsured driver might, with physiotherapy and special equipment, be able to work again and purchase insurance against future accidents but could be left unable to pay the cost of this assistance. The point is that sometimes a person might be able to regain his capacity for responsible agency but be unable to repay society the money it pays out to achieve this end. I believe that reflecting on the foregoing practical considerations ought to make us question why we value individual responsibility and whether fairness is the most important value. If we also value individual responsibility because we think it is a good way of life for people to lead, then sometimes it might be desirable to waive personal responsibility now, so as to leave people more able to assume responsibility later on. Looking at the role of perspectives in social welfare policy can also help to make clearer the tensions between different responsibility-supporting values. Two perspectives seem especially relevant. From a backward-looking perspective we focus on the causes of misfortune, and in some cases we note that a person has only himself to blame. The question defining this perspective can be stated as follows: why should society help individuals who are responsible for their own downfalls? As the Ants put it: "What did you do this past summer?" "Oh," said the Grasshopper, "I kept myself busy by singing all day long and all night, too." "Well then," remarked the Ants, as they laughed and shut their storehouse, "since you kept yourself busy by singing all summer, you can do the same by dancing all winter" [41].
In contrast to this, from a forward-looking perspective we consider what can be done to change people's behaviour for the better, and in some cases we realise that a person can only be helped and encouraged to behave more responsibly in the future if we do not withdraw aid. The point is that if the Ants do not agree to help the Grasshopper, then he will probably starve to death and the Ants will be unable to impress upon him the importance of working hard during the summer. The question defining this alternative perspective can be expressed in the following terms: what, if anything, can we do to foster a greater sense of individual responsibility among individuals who, in the past, have failed to take responsibility? [42] Trying to give equal weight to both perspectives may highlight tensions within our system of values. Whereas the backward-looking perspective is often motivated by considerations of fairness alone, the forward-looking perspective involves a much wider set of ethical concerns, such as the desire to promote human flourishing. At the level of public policy the dilemma is this: should we, on the one hand, enact a policy of offering treatment only to those drug addicts, negligent drivers, and so on, who are capable of paying for it either at the point of delivery or at a later stage, or should we, on the other hand, enact a policy of assisting all drug addicts, negligent drivers, and so on, regardless of their ability to pay now or later?

Fairness and Self-Respect

My case here is not limited to examples of people who suffer a temporary loss in their capacity for responsible agency. The blind pursuit of fairness can also detract from strategies designed to enhance self-respect. Take employment as an example. Some egalitarians have developed a conception of responsibility-sensitivity which, in the case of unemployment, implies that people who are able to work and support themselves do not have a right to taxpayers' money.
Under this proposal, the main task of welfare officials in dealing with those who are unemployed would be to judge whether or not a claimant has a valid excuse for being out of work. This certainly is one way of catering to individual responsibility, but there are drawbacks with this type of system. There is little or no evidence to suggest that this strategy can increase levels of work among the most recalcitrant work-shy. On the contrary, studies in the United States and Britain have shown that denying aid to people who elect not to work is not sufficient for a change in their pattern of behaviour and underlying work ethic. "Merely to deny aid does not tell people what they should be doing instead of being dependent. It is not prescriptive enough" [43]. Why should we care? There are a number of reasons, but one notable concern is that this may constitute a missed opportunity for promoting the benefits of work and with it increased self-respect. By maintaining aid there is always a chance that a person may, at a later stage, agree to learn new skills and engage in job-searching behaviour. The result may be a new job and an increase in self-respect. Now I do not claim that maintaining aid is necessary for promoting work in every case. Clearly there will be people for whom the threat of losing benefits will be a powerful incentive to find work, and once in work they will have self-respect because they no longer rely on the State. Nor do I claim that paid employment is necessary for self-respect. There will be people whose self-respect is not greatly influenced by lack of earned income one way or the other: they do not need to earn a wage in the market place or be encouraged to look for work to have a sense of their own value. Indeed, some people may derive self-respect from engaging in leisure activities that they might not be able to engage in if they worked for a living [44]. 
I also do not deny that promoting individual responsibility may have adverse consequences for some people: there is the problem that in trying to encourage people to work we risk humiliating, or setting up for a fall, people who are in fact incapable of work [45]. Nevertheless, from the fact that cutting aid does not always inhibit efforts to promote work as one social basis of self-respect, it does not follow that we do not have to take seriously the possibility that it might do so for some people. Without the benefit of hindsight, we cannot know how someone would have responded had the rules dealing with eligibility been drawn differently. It all depends on the individual. Furthermore, even if we accept the fact that there are other social bases of self-respect besides paid employment, this does not mean we should ignore this basis. If we are truly committed to promoting self-respect and we have reason to suspect that lack of work and lack of support for work can detract from the social bases of self-respect in an important way, then perhaps we should suspend our obsession with fairness for a moment--an obsession that tends to focus all our efforts on weeding out unworthy or bogus claimants--and spend at least some taxpayers' money on initiatives that are aimed at directly challenging the behaviour and work ethic of those who are habitually unemployed or under-employed. In this section I have explored potential sources of conflict between different responsibility-supporting values. However, I should emphasise before we go any further that my argument is limited in scope. I claim only that conflict can arise given some conceptions of fairness and under some circumstances. I do not say that value conflict will arise no matter how fairness is defined and no matter what the circumstances might be. By way of illustration, so far I have focused on a reading of fairness that says people who "get themselves into trouble" are not entitled to public assistance.
On this reading of fairness, since the drug addict negligently allowed himself to fall into this lifestyle and the jobless man is voluntarily unemployed, they do not have a right to redistribution. Yet, as mentioned above, there are other conceptions of fairness. Some egalitarians believe that everyone is entitled to real freedom or an equal share of the world's resources and, therefore, that even the drug addict and the jobless man have a right to income from the State. Perhaps the value conflict will disappear if the world is organised around this conception of fairness. How so? If someone with a recreational drug habit has a steady stream of income he will either be able to pay for insurance against addiction or buy the drugs he needs should he become an addict. The second option may not be everyone's idea of a fully flourishing life, but at least he is taking responsibility for his own ends, in one sense, since he is paying for what he wants with the share of resources he can fairly expect to receive. It may be possible to construct a similar argument in the case of the jobless man. With a steady stream of income he can fend for himself and, though he does not work for a living, this public recognition by others may provide an alternative source of self-respect. The key point, then, is that forms of responsibility-sensitivity that might be appropriate under some conceptions of fairness might not be appropriate when fairness is defined differently. This means that there are fairness rationales that do and there are fairness rationales that do not support the claims of the drug addict and the unemployed man to unconditional income. A further implication is that if there is value-conflict, it is contingent on how fairness is fleshed out.

IV. Which Policies Should We Favour?

It is tempting to say that we value individual responsibility for each of the reasons that have been considered here.
If this is true, then selecting a suitable regime of individual responsibility may not be easy. Yet I believe the foregoing arguments do have at least some practical implications for social welfare policy, and I now want to bring these out. One implication is that if we value individual responsibility not simply for reasons of fairness but also for autonomy and human flourishing (say), in some cases we have reason not to hold people individually responsible for the adverse consequences of their past choices by denying aid, so that they might be better able to assume responsibility or lead a more responsible life later on. What does this mean? For one thing it means that the State ought to protect the capacity for responsible agency of all citizens. This implies that drug addicts should have access to specialist treatment regardless of their ability to pay and that negligent drivers should be taken to hospital and cared for even if they are uninsured [46]. However, this assumes that, of the collection of values discussed in this article (utility, the social bases of self-respect, autonomy, human flourishing and fairness), fairness is not lexically prior and so can be trumped in cases of conflict. Nonetheless, some may insist that fairness is the dominant value and, therefore, should take priority. On this view, if upholding fairness means enforcing consequential responsibility on individuals at certain times by denying them aid, then we should do so whatever the consequences. Admittedly, if this is the view one takes of fairness and its priority amongst other values, this is the conclusion one should draw. But I should make it clear at this stage that this is not the only view on offer.
Some may believe that it is more important to help people recover their capacity for responsible agency because of utility or self-respect or autonomy or human flourishing than it is to insist that people bear the consequences of their past choices for reasons of fairness. I do not intend to try and settle this argument here. Instead, I merely suggest that accepting other responsibility-supporting values besides fairness opens up certain dilemmas about social policy and that resolving these dilemmas by asserting the lexical priority of fairness is far from self-evidently correct. There is, however, a further wrinkle. Libertarians may claim that it is morally insupportable for the State to intervene to protect responsibility in the ways specified above. How so? Libertarians argue from the value of autonomy to very strong rights of non-interference to life, liberty and property. The basic thought is that since it is valuable for individuals to select a way of life that best suits them and for them to act in accordance with this self-determination, it is morally fitting that they should have a protected sphere of property and action. So even if we agree both that individual responsibility can lead to self-respect and can itself embody autonomy and human flourishing, which are perceived as good things, and that moral agents ought to pursue individual responsibility in their own lives and ought to help others to pursue it in theirs, it does not follow from this that the State ought to enforce this as a collective activity. On the contrary, libertarians may insist that people have rights to property and the State violates those rights by forcing them to contribute to responsibility-protection. On this view, getting people off drugs (say) is rightly a matter for private individuals and charities to pursue, perhaps with some special responsibility falling on family members.
This is one example of a standard libertarian argument against welfare-supporting intervention by the State. It rests on the claim that individual citizens have an inviolable right to live in society and retain their income and wealth without being forced to take a part in the welfare of others. However, accepting the value of autonomy does not entail libertarianism. An opposing view is that if part of the justification for very strong rights to non-interference is that we value autonomy, then it is morally appropriate to leave a proportion of people's income and wealth outside of the protected sphere in order to fund autonomy-protecting policies. Otherwise libertarianism is bound to lead to some people's misadventures going untreated, which, in turn, will have an adverse effect on their future autonomy. On this view, it is acceptable to impose taxes on people who are capable of exercising individual responsibility to ensure the same capacity for all. None of this is intended to demonstrate that autonomy cannot be used to justify at least some very strong rights to non-interference. Rather, the point is that using autonomy to justify the very strong rights of taxpayers to reap all of the rewards of their labours is questionable in situations where the result is detrimental to autonomy. A second major implication of my investigation is that ideally some taxpayers' money should be spent on giving people who are unemployed the practical support and positive encouragement they need to assume greater responsibility for their own livelihoods. I do not claim that valuing this type of individual responsibility logically entails government action to promote it, but I do think it offers a practical reason to act. This reason takes the following form: if we value A, and doing X, Y and Z will help to foster A, then we have a prima facie reason to do X, Y and Z.
This can be done in different ways, such as: giving the idle access to income from the State in so far as this enables officials to exert influence over their attitudes and behaviour and encourage them to assume responsibility in the future; free or subsidised education and training; free or subsidised travel and child care; tax exemptions; and perhaps a limited role for society as an employer of last resort. However, at this stage in the argument I may face the following objection. Is not individual responsibility about being proactive, acting on one's own initiative and relying solely on one's own resources to make ends meet? If this is true, then surely it is muddled thinking to believe both in the value of individual responsibility and that government should intervene to encourage individuals to assume responsibility. This is an interesting objection but I do think it has an answer. Perhaps in an ideal world individuals would assume responsibility for their own lives spontaneously. But it does not follow from this that lesser degrees of individual responsibility are worthless. We may value acts of individual responsibility that flow from individual initiative above acts of individual responsibility that result from government intervention. Even so, holding this view about the relative value of acts of individual responsibility is perfectly consistent with the claim that all acts of individual responsibility can be valuable. The discussion to this juncture has brought us to the suggestion that because we value individual responsibility government should pursue efficient means to foster it. But another issue must also be addressed. Do the ends justify the means? To answer this question we must surely start with an assessment of how much we value individual responsibility.
According to this justification, it must be shown that we place sufficiently high value upon individual responsibility to warrant the costs involved and that there is no alternative policy that would yield the same result at lesser cost. This then requires a response to the further question: how can it be shown that the society in question values individual responsibility? No doubt there are different ways of showing that we value individual responsibility, but presumably the clearest expression is through the decision of voters to accept the policy. In addition to this we must bear in mind the libertarian challenge that people have rights and there are things no government may do to them or make them do without violating their rights. In respect of responsibility-protection, I have argued that the State is justified in levying taxes on people in order to protect the capacity for responsible agency of everyone, including those who get themselves into trouble. But the present case is different. The question is this: what right does the State have to force taxpayers to fund responsibility-promotion? I accept that the libertarian challenge is more difficult to accommodate in this case. There are nevertheless two possible counters to this challenge. One counter is that, although people should be able to exercise their autonomy and decide what ends they want to promote, in a democracy people agree by means of voting to abide by the decision of the majority. So if the majority of voters are in favour of responsibility-supporting policies, the minority are bound by this decision and their rights are not violated. However, even if the argument from democracy is denied by libertarians, a second counter is that although individuals have rights of non-interference to life, liberty and property, they also have rights to other things including the social bases of self-respect.
The basis of this counter is that because self-respect plays such a crucial part in making life worth living, the State has a responsibility to secure the social bases of self-respect. Thus, in so far as work is one of the most important social bases of self-respect, the State ought to secure the means to foster it [47]. A further desideratum is that the proposed means for promoting individual responsibility should respect the autonomy of those targeted. As I have already noted, the political right has often criticised welfare-supporting intervention by the State on the grounds that it is paternalistic. It is conceivable that the same charge will be levelled against State intervention aimed at fostering acts of individual responsibility. But, in reply to this concern, it is a further implication of my investigation that even if most people within a society endorse a conception of the good that extols hard work, delayed gratification, self-reliance, and so on, the State is not entitled to impose this conception on its citizens. Now this does not mean that governments should refrain entirely from espousing views about human flourishing. Governments often express views about how people can 'best' look after their own interests. But it does imply that governments should be careful about how they advance work (say) as the best way of life--for example, single unemployed mothers should not be made to feel as though they are doing something wrong by not earning a living in traditional labour markets. Officials may offer guidance about how best to make work pay, but ultimately they should respect the sincere belief of many single unemployed mothers that staying at home to look after their children is the best outcome for them (and their children) [48]. Whilst I hope the above policy prescriptions are plausible, I am aware that they complete only part of the picture.
Inevitably there will be more detailed questions about how, more exactly, policies should be defined and implemented. Suppose we believe that people should have access to medical and therapeutic treatment if this will help them recover their responsibility--what if a person continually drops out--should he or she continue to receive the opportunity? And suppose we believe that the idle should receive income from the State in the hope that they might assume responsibility later--for how long should we make this income available? When do we cut our losses? And if a society does take up the challenge of promoting individual responsibility through work (say)--just how far may it pursue this end? Exactly how much money should be spent and what kind of 'encouragement' is legitimate? It is conceivable that political philosophy may have something useful to say towards the resolution of even these awkward questions. Dworkin, for example, looks to both real and hypothetical insurance markets to determine more precisely the proper division between individual and collective responsibility [49]. More likely, however, is that when we are faced with detailed questions about how responsibilities for social welfare should be divided--where practitioners have to implement principles of distributive justice, achieve political goals and treat individuals with respect, but at the same time respond to the views of voters and the practical demands of efficiency and cost--a variable approach is needed, in which policies are adjusted as and when the balance shifts too far in the direction of any single value. So even if we agree that individual responsibility is important, and for a number of reasons, in the end we may just have to accept the fact that there are no simple answers to questions about which policies we should favour. Let us take stock.
We have seen that there are many rationales for pursuing responsibility-catering policies and, furthermore, that these reasons do not always support enforcing consequential responsibility. So, in light of all this, should egalitarians continue to advance principles which are responsibility-catering? Nothing I have said so far implies that egalitarians should abandon responsibility-sensitivity in favour of flat equality. What I have tried to show is merely that what we mean by "responsibility-sensitivity" and "responsibility-catering policy" depends on the particular values in play. Even so, I do think egalitarians should be more aware that fairness is not the sole reason why individual responsibility matters and may not even be the most important reason. However, if egalitarians do take on board this last mentioned point, will those various objections to responsibility-sensitivity set out in Anderson's article, What is the Point of Equality? continue to be important? My tentative answer is that if egalitarians do accept a pluralistic view of the value of individual responsibility, then egalitarian theory will be susceptible to fewer of the objections put forward by Anderson. Obviously, to investigate this question fully we should consider all of the objections raised by Anderson, but for reasons of brevity I shall try to make do with one illustration. Consider again Anderson's case of the uninsured driver lying injured at the side of the road. According to Anderson, accepting the requirement of responsibility-sensitivity entails that it would be acceptable to withhold assistance. But I hope I have been able to demonstrate that there is no such entailment. The reason, as we have seen, is that sensitivity to individual responsibility can mean different things depending on which value is most important in any given case.
If treatment will help the driver to assume responsibility at a later date, either in the sense that he will be able to repay the debt or at least return to work and purchase insurance against future accidents, then treatment may well be justified. This raises one final question: what should we do in cases where there is no reason of responsibility to intervene--where someone is responsible for the situation in which he now finds himself and there is no hope of helping him assume responsibility in the future? Reflecting on what I have argued so far I hope the reader will not assume I am suggesting that responsibility is all that matters. I do not think this. A society may have other reasons--let us call them humanitarian reasons--to assist people whose chances of future responsibility are at best slight. That we should relieve suffering and help the stricken is a moral injunction that does not, and need not, rely on the assumption that we are thereby placing people in a better position to assume responsibility later. Sometimes we are not. Nevertheless, arguably it is a matter for each society to decide the relative importance of humanitarian aid and holding people responsible for the consequences of their voluntary choices, and there is no guarantee that any society will decide to give priority to humanitarianism.

Alexander Brown, Department of Philosophy, University College London, Gower Street, London WC1E 6BT, UK. alex.brown at ucl.ac.uk

Acknowledgements

Much of this paper derives from my University of London Ph.D. research on individual responsibility. In this respect I am indebted to Jonathan Wolff, not only for his many useful observations and suggestions but also for his patient support. In addition to this I must record my gratitude to Alex Voorhoeve for his helpful thoughts on early drafts of this paper and Richard Arneson for his encouragement of my inquiry. Finally, I wish to thank two anonymous reviewers for their valuable comments.

NOTES

[1] See J.
Rawls (1982) Social unity and primary goods, in A. Sen and B. Williams (eds) Utilitarianism and Beyond (Cambridge, Cambridge University Press), and R. Dworkin (1981) What is equality? Part 1: equality of welfare, Philosophy and Public Affairs, 10, pp. 185-246. [2] See R. Dworkin (1981) What is equality? Part 2: equality of resources, Philosophy and Public Affairs, 10, pp. 283-345, and J. Rawls (1996) Political Liberalism (New York, Columbia University Press), esp. pp. 181-182n.9. [3] R. Nozick (1974) Anarchy, State and Utopia (Oxford, Blackwell), p. 149. [4] See F. A. Hayek (1960) The Constitution of Liberty (London, Routledge), ch. 15, and (1976) Law, Legislation and Liberty, Volume 2: The Mirage of Social Justice (Chicago, University of Chicago Press), ch. 9. [5] Dworkin, Equality of resources, p. 293. [6] Ibid., p. 311. [7] For an interesting discussion of this distributive bias see I. M. Young (1990) Justice and the Politics of Difference (Princeton, Princeton University Press). [8] R. Malthus (1992) An Essay on the Principle of Population (Cambridge, Cambridge University Press), p. 101. [9] C. Murray (1984) Losing Ground (New York, Basic Books), p. 180. [10] Foreword in D. Blunkett (2001) Towards Full Employment in a Modern Society (Norwich, HMSO), p. vi. [11] See V. George and P. Wilding (1994) Welfare and Ideology (New York, Harvester Wheatsheaf), pp. 31-32. [12] E. Anderson (1999) What is the point of equality? Ethics, 109, pp. 295-298. [13] Ibid., p. 312. [14] For further analysis of this argument see K. Arrow (1971) A utilitarian approach to the concept of equality in public expenditures, Quarterly Journal of Economics, 85, p. 409. [15] See, for example, R. Arneson (1990) Is work special? Justice and the distribution of employment, American Political Science Review, 84, p. 1132. [16] S. Darwall (1977) Two kinds of respect, Ethics, 88, pp. 36-49. [17] Commission on Social Justice (1993) The Justice Gap (London, Institute for Public Policy Research), p. 16.
[18] J. Rawls (1971) A Theory of Justice (Oxford, Oxford University Press), p. 440. [19] Rawls, Political Liberalism, p. lix. [20] Ibid. [21] See D. Schmidtz (1998) Taking responsibility, in D. Schmidtz and R. Goodin (eds) Social Welfare and Individual Responsibility (Cambridge, Cambridge University Press), pp. 93-94. [22] See Dworkin, Equality of welfare. [23] Rawls, Political Liberalism, p. 186. [24] See G. A. Cohen (1989) On the currency of egalitarian justice, Ethics, 99, pp. 906-944, and R. Arneson (1989) Equality and equal opportunity for welfare, Philosophical Studies, 56, pp. 77-93. [25] B. Landesman (1983) Egalitarianism, Canadian Journal of Philosophy, 13, p. 36. [26] Ibid., p. 37. [27] D. Willetts (1992) Modern Conservatism (London, Penguin), p. 150. [28] See, for example, J. S. Mill (1994) Principles of Political Economy (Oxford, Oxford University Press), p. 132. [29] Quoted in R. E. Goodin (1998) Reasons for Welfare (Princeton, Princeton University Press), p. 336. [30] Dworkin, Equality of resources, p. 327. [31] See Rawls, Political Liberalism, pp. 181-182n.9, and (2001) Justice as Fairness: A Restatement (Cambridge, Mass., Harvard University Press), p. 179. [32] Dworkin, Equality of resources, p. 326. [33] Anderson, op. cit. pp. 316-318, 321, 328. [34] Ibid., pp. 323-324. [35] R. Arneson (1997) Egalitarianism and the undeserving poor, The Journal of Political Philosophy, 5, pp. 327-350. [36] A. Levine (1998) Rethinking Liberal Equality (New York, Cornell University Press). [37] See P. Van Parijs (1991) Why surfers should be fed: the liberal case for an unconditional income, Philosophy and Public Affairs, 20, pp. 101-131, and (1995) Real Freedom for All (Oxford, Oxford University Press). [38] T. Hinton (2001) Must egalitarians choose between fairness and respect? Philosophy and Public Affairs, 30, pp. 72-87. [39] Anderson op. cit. pp. 295-296. [40] For one version of this view see R.
Arneson (1997) Postscript to Equality and equal opportunity for welfare, in L. Pojman and R. Westmoreland (eds) Equality (New York, Oxford University Press), p. 239. I ignore here Anderson's recent objection that: "In adopting mandatory social insurance schemes for the reasons they offer, luck egalitarians are effectively telling citizens that they are too stupid to run their lives, so Big Brother will have to tell them what to do. It is hard to see how citizens could be expected to accept such reasoning and still retain their self-respect." Anderson op. cit. p. 301. [41] Aesop (1996) Aesop's Fables (London, Penguin), p. 11. [42] A similar differentiation of perspectives can be found in Schmidtz op. cit., p. 6. [43] L. Mead (1997) From welfare to work, in A. Deacon (ed) From Welfare to Work (London, IEA Health and Welfare Unit), p. 20. [44] Catriona McKinnon has recently defended unconditional basic income for all precisely on this basis. See C. McKinnon (2003) Basic income, self-respect and reciprocity, Journal of Applied Philosophy, 20, p. 148. [45] For more on this see J. Wolff (1998) Fairness, respect, and the egalitarian ethos, Philosophy and Public Affairs, 27, pp. 97-122. A similar point is made by Arneson in his Is work special?, p. 1133. [46] On Rawls' interpretation, the aim of keeping people as close as possible to the ideal of citizens as normally functioning and fully co-operating members of society is no less directed at people whose functioning falls below that ideal due to lifestyle choices than at any other citizen. So, even where an individual is responsible for his or her own misfortune, the social contract argument generates principles of justice which direct the State to restore people by medical or psychiatric treatment as required. See Rawls, Political Liberalism, p. 185n.15, and Justice as Fairness, p. 175.
Much less clear is what Rawls thought about the costs of securing basic health care for all and whether they should fall equally on society at large or in differing degrees to specific individuals and groups of individuals depending on their degree of responsibility. But whatever he thought about this, it is difficult to interpret Rawls as claiming that hospitals can turn people away with justice, if it is ascertained that they have no insurance and cannot afford to pay. [47] See Rawls, A Theory of Justice, p. 440, and Political Liberalism, p. lix. [48] Anderson reaches much the same conclusion: op. cit. pp. 323-324. [49] See Dworkin, Equality of resources, and (2000) Sovereign Virtue (Cambridge, Mass., Harvard University Press), ch. 9. From checker at panix.com Mon Jul 18 00:16:10 2005 From: checker at panix.com (Premise Checker) Date: Sun, 17 Jul 2005 20:16:10 -0400 (EDT) Subject: [Paleopsych] Next Book: Genius in the Making Message-ID: Genius in the Making http://www.nextbook.org/cultural/feature.html?id=181 5.7.14 If a scientific theory about Jews being smart is so politically incorrect, why aren't more people complaining? BY David Berreby During the persecutions that marked the last millennium of European history, many Jews refused forced conversion to Christianity. Instead, they died as martyrs al kiddush hashem, in sanctification of the Name. Others, of course, preferred to live. Tradition accords them no honor. It appears to be human nature to celebrate those who choose death over a convenient change of identity. But not everyone is prepared to kill or die for "my people," and we can disagree about what "my people" means and what its interests are. But no one finds the notion of dying for identity to be incomprehensible. Why is this so? What's the source of these religious, ethnic, national, and cultural identities that inspire and terrify us all? They could reflect real differences in measurable quantities, like skin color or height. 
Or the boundaries between one people and another could be rooted in the mind--consequences of the way we're taught to perceive ourselves and other human beings. Both perspectives--call them objectivist and subjectivist--have respectable histories dating back centuries. In Aristotle's time, the objectivist tradition led him to explain that Asians were flighty and Northern Europeans stupid, but the Greeks were just right because they were shaped by the temperate climate. Today, it causes some to believe there's a reason why Kenyans run faster than Inuit--and that reason explains why these two groups really are distinct. This kind of thinking gives us the rules of thumb we use to make sense of strangers in ordinary life: Because he is an X, he will do Y. The subjectivist tradition, on the other hand, says the difference between Kenyans and Inuit is in the eye of the beholder. That doesn't mean they're imaginary or easily wished away, but it does mean these differences begin in the circumstances and mentality of the observer. After all, while Kenyans and Inuit are different in many measures, they are also, judged on other criteria, exactly alike. They are members of the same species; fathers and mothers; non-Western peoples coping with a Western-dominated world. To a subjectivist, the world doesn't naturally divide into peoples. It is divided--by observers, whose categories will change depending on their purposes. This view led Aristotle (who was all over the map) to note that people are shaped by life with other people, in a process that never ceases to change. The Enlightenment philosopher David Hume noted how readily people change--both on the scale of historical time (18th-century Greeks were not the same as ancient Greeks, he said) and at the scale of an individual life (as when an average man becomes an elite soldier, because he has joined a high-prestige unit and molded himself to its ways).
The subjectivist outlook has been derided as postmodernism, a label that connotes faddish and fancy academic footwork, remote from real life. Yet this view, too, is common sense. It is what tells us not to be guided by stereotypes: just because she is an X, doesn't mean she can't do Y. It lets us envisage a politics in which the future is better than the past. That's important in any democracy. In the United States, it is how, for example, Italians, once regarded as non-white, were incorporated into the American mainstream. And it is how the descendants of Jewish immigrants entered the elite institutions that once excluded their grandparents. In practical affairs, in other words, most people use both the objectivist and subjectivist interpretations of ethnic, religious, national, and cultural categories. Like physicists who decide whether to regard light as particles or waves, depending on the experiment they're performing, most Americans deal with ethnic-cultural information in whatever mode suits the occasion. Yet these two ways of understanding aren't really compatible, so people who have thought systematically about these questions tend to take a side. Surely, says the objectivist, if many people say, "Jews are smart," it's because Jews are measurably intelligent. Well, answers the subjectivist, people used to say Jews were particularly well-suited to play basketball. (A Daily News sportswriter put it this way in the 1930's: "The reason, I suspect, that basketball appeals to the Hebrew with his Oriental background is that the game places a premium on an alert, scheming mind, flashy trickiness, artful dodging and general smart aleckness.") So let us not be too sure that today's talk has got free of today's narrow circumstances. There are good reasons to feel ambivalent about this endless debate. The emancipation of Diaspora Jews over the past two centuries of Western history could only take place in a subjectivist context. 
Everyone's shared humanity is more real and certain, that view tells us, than the changeable, supposedly important differences that let us tell one ethnic group from another. In fact, Jewish thinkers contributed enough to the spread of this subjectivist view that anti-Semites from Goebbels to the Islamist Sayyid Qutb portrayed the thought of Karl Marx, Sigmund Freud, and Franz Boas and others as a specifically Jewish attack on the Gentiles. Then too, the idea of intrinsic Jewish traits has been a tool of anti-Semitic propaganda for centuries. Still, what would identity be without a sense of indelible peoplehood? Without the feeling that Jewishness is a fact, not a choice or circumstance, it would be hard to make sense of politics, culture, or one's own family. And that sense of factuality requires facts: Statements about what is and isn't Jewish, which don't depend on changing circumstances. Many Jews were pleased to hear about the 1997 paper, published in Nature, in which Karl Skorecki and his colleagues showed that the DNA of many kohanim carries a distinct marker of descent from an ancestor who lived several thousand years ago. People like to feel definite about their identity, so they welcomed news that the genetic level of analysis seemed to confirm the cultural tradition of priestly descent from Aaron. That got a lot more attention than the fact that some 30 percent of kohanim lack the genetic marker; and that many Iraqi Kurds possess it. That paper exemplifies a trend that will make the objectivist-versus-subjectivist problem more urgent in the future. Not long ago, doctors and physical anthropologists could deal with the genetics of Jewish populations without touching on cultural and religious issues such as the nature of stereotypes about smart Jews or the effects of Diaspora history. Sociologists could study the history of Ashkenazic Jews in America without confronting claims about genes. This peaceful coexistence is coming to an end. 
Subjectivist and objectivist thinkers are increasingly converging on the same issues and problems. A striking case in point turned up last month, when The New York Times and other news outlets announced the imminent publication of an objectivist research paper, by Gregory Cochran, Jason Hardy, and Henry Harpending of the University of Utah, which proposed (a) that Ashkenazic Jews are said to be smart because they really are; (b) that this is so because Ashkenazim spent nearly a thousand years in Europe in a social "niche" that required brains - traders and financiers in a society that forbade them to be farmers or soldiers; (c) that certain genes found among Ashkenazim are the cause of their high intelligence; and (d) that these same genes are also responsible for a cluster of hereditary diseases, including Tay-Sachs, Gaucher's, and breast cancer, that are more common among Ashkenazim than they are in other groups. It's a fascinating and disquieting paper, not least because it gives no easy comfort to the objectivist school. If you would like to think that Jewish smarts are a real biological fact, the paper will side with you. But only about the Ashkenazim, which might put a dent in your joy, especially if you happen to be Sephardic. The paper says Ashkenazic history created a legacy of unusual abilities (that's flattering and it's pleasant to think there's a solid explanation). But the reason it gives is the role of Jews as financiers and moneylenders (which reminds you of stereotypes that are none too pleasant to contemplate). The Times piece was written by Nicholas Wade, a first-rate science writer who is, I think, sympathetic to the objectivist camp. Wade gave the paper a well-spun launch, prominently featuring a quotation from another scientist with an objectivist tilt, Steven Pinker of Harvard. "It would be hard to overstate how politically incorrect this paper is," Pinker said.
It was a savvy preemptive move - one favored by media-wise scientists of the objectivist school. By warning that the ideas to follow might offend the "politically correct," Pinker gives them the allure of forbidden truths and protects against counterarguments. After all, the implication is that if you reject this paper's arguments, you're just reacting with a kneejerk. In this case, however, Pinker didn't need to protect the paper from a PC assault. It got a friendly reaction from the media that covered it. It was, for several days, on the Times' most-emailed list. And in fact, its senior author, Harpending, told me last month that he had not received a single phone call or email of the "how-could-you?" variety. Why? One reason is simple: Most regular people love objectivist explanations of human differences. They're so pleasantly straightforward, and they seem, somehow, to confirm your intuitions. Never mind that those same intuitions tell you that the earth is flat and that the sun revolves around us. On top of that, science journalism, despite each article's inevitable quotation or two from someone reminding the reader that these matters are complex and culture is involved, consistently favors the "objectivist" school. The form of the newspaper, news website, or blog presumes that source, reporter, and reader agree upon the nature of facts and causes. But the subjectivist perspective requires that you think about points of view, and how perceptions, including perceptions of fact, are affected by history and experience. News articles tell you that facts exist, and that these facts have causes, which consist of other facts. That's an easy fit for an idea like "Jews are smart because for centuries they worked at jobs that require brains."
But it doesn't work so well for a thought like: "People we call Jews today are related to, but not the same as, people called Jews in 1300, and, by the way, IQ tests that measure intelligence in the 21st century weren't around in the 14th, so we have to think about what we mean by intelligence, and how it is measured." So the largely favorable journalism about objectivist ideas shouldn't mislead anyone into thinking these ideas are stronger and truer. Of course, this doesn't mean they're invalid, either. So how should we think about this paper, and the wave of largely favorable coverage it got? First, it's important to realize that the real-or-mental issue isn't going away. In fact, the debate over group differences - what are they? where do they come from? which ones matter? - is becoming ever more practical as pharmaceutical companies seek racially and ethnically specific drugs. The FDA recently approved the first one, BiDil, a medication for treating heart disease in African-Americans. Second, neither the objectivist nor subjectivist school of thought consists of stupid people. For example, people who believe that IQ tests measure something real and consistent do not believe this means every single person of Ashkenazic descent is supersmart. Every fact about populations is a matter of probability - a claim that someone in Group A is X percent more likely to score Y than a generic human - not an all-or-nothing assertion. Conversely, subjectivists aren't emotional Stalinist morons, too afraid of facts to allow a debate. They are scientists and other intellectuals whose doubts about the notion of intrinsic, unchanging traits are well-founded. Everyone realizes now that the question is not Nature or Nurture, but Nature and Nurture. The fights are about how we tell them apart, and how they interact. Third, and most importantly, today's attempts to link levels of analysis and cross disciplinary lines represent an opportunity, not a threat.
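The probabilistic point a few sentences back - a shifted group average, not an all-or-nothing trait - can be made concrete with a quick simulation. The numbers here (means of 100 and 105, standard deviation of 15) are invented for illustration and are not taken from the Utah paper or any study:

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: two populations whose scores differ
# only by a small average shift. All parameters are invented.
group_a = [random.gauss(100, 15) for _ in range(100_000)]
group_b = [random.gauss(105, 15) for _ in range(100_000)]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)

# Even with Group B's higher average, a large share of Group A
# still scores above Group B's mean: the two distributions
# overlap almost entirely, so the group-level statistic predicts
# little about any individual.
share_a_above_b_mean = sum(x > mean_b for x in group_a) / len(group_a)
print(f"mean shift: {mean_b - mean_a:.1f}")
print(f"share of Group A above Group B's mean: {share_a_above_b_mean:.2f}")
```

Run as written, the average shift comes out near 5 points while more than a third of the "lower" group still outscores the "higher" group's mean - which is the sense in which such claims are probabilistic rather than categorical.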
Engaging with alien-sounding objectivist and subjectivist versions of what it means to be Jewish is a way to understand the issues better, and to tell speculation apart from anti-Semitism. Among those who have claimed to study Jewish history from a Darwinian perspective is the psychologist Kevin MacDonald of California State University at Long Beach, who, in a trilogy of books, has interpreted Jewish tradition as an "evolutionary strategy" that gives Jews an advantage in competition for resources with Gentiles. MacDonald cites Freud and Marx (and Star Trek!) as examples of Jews promoting universalism to get an edge over Gentiles. "A multicultural society in which Jews are simply one of many tolerated groups is likely to meet Jewish interests," he wrote in 1998. MacDonald claims, then, that even the atheist and antireligious Marx served an eternal Jewish drive to win out over Gentiles. Judith Shulevitz, writing in Slate in 2000, noted the obvious similarity of this supposedly new biological argument to familiar tropes of anti-Semitism. (The occasion was MacDonald's testimony on behalf of the Holocaust denier David Irving.) Though the recent Utah paper shares Darwinian language (genes, evolution, IQ tests) with MacDonald's books, it is, in substance, very different. The Utah researchers define who they are talking about - Ashkenazim - much more precisely. They name the genes that, they believe, are explained by their theory. And they say why they think intelligence would be selected for in this population, and why that effect could explain facts about genetic disease. Their ideas, in other words, are specific and novel, where MacDonald's are sweeping and familiar from centuries of anti-Semitic propaganda. To see the difference - to avoid the mistake of thinking all evolutionary thinking is crudely reductionist or racist - you have to engage it. Not taking sides in the vapid Nature-Nurture war saves your attention for the real controversies taking place within disciplines.
The Utah paper's ideas aren't daring forbidden knowledge; nor are they stalking horses for prejudice. The paper merits some thought. And if you find yourself comfortably pleased by it, or comfortably disdainful of it, you probably want to think about it some more. David Berreby has written for the New York Times Magazine, The New Republic, Slate, The Sciences, Smithsonian, and Discover. His book Us and Them: Understanding Your Tribal Mind will be published in October. He also has a blog.
From checker at panix.com Mon Jul 18 00:16:17 2005 From: checker at panix.com (Premise Checker) Date: Sun, 17 Jul 2005 20:16:17 -0400 (EDT) Subject: [Paleopsych] Laird Wilcox: Selected Quotations for the Ideological Skeptic Message-ID:
Selected Quotations for the Ideological Skeptic - Free Downloadable Texts
Civil Liberties, Individual Rights, Freedom of Expression, Propaganda, Persuasion, Deception, Rationality, Rhetoric, Skepticism, Logic, Fanaticism, Dogmatism, Ideological Thinking, Political & Social Psychology
Free downloadable PDF books. Please feel free to circulate this list.
The Writer's Rights: Over 1,600 Selected Quotations on Freedom of Expression, Civil Liberties and Individual Rights. Compiled by Laird Wilcox. 136 pp. ISBN 0-9761337-4-1. http://www.overalltech.net/pub/Quotes5Writer_sRights.pdf
Propaganda, Persuasion & Deception: Over 1,125 Selected Quotations for the Ideological Skeptic. Compiled by Laird Wilcox. 124 pp. ISBN 0-9761337-0-9. 2005. http://www.overalltech.net/pub/Quotations-Propaganda.pdf
Rationality, Rhetoric, Skepticism & Logic: Over 835 Selected Quotations for the Ideological Skeptic. Compiled by Laird Wilcox. 80 pp. ISBN 0-9761337-3-3. 2005. http://www.overalltech.net/pub/Quotations-Rationality.pdf
Fanaticism, Dogmatism & Ideological Thinking: Over 1,050 Selected Quotations for the Ideological Skeptic. Compiled by Laird Wilcox. 95 pp. ISBN 0-9761337-1-7. 2005.
http://www.overalltech.net/pub/Quotations-Fanaticism.pdf
Political & Social Psychology & Behavior: Over 1,250 Selected Quotations for the Ideological Skeptic. Compiled by Laird Wilcox. 121 pp. ISBN 0-9761337-2-5. 2005. http://www.overalltech.net/pub/Quotations-PoliticalPsych.pdf
Laird Wilcox mailto:lwilcox at aol.com 1-913-829-0609
From checker at panix.com Mon Jul 18 00:16:35 2005 From: checker at panix.com (Premise Checker) Date: Sun, 17 Jul 2005 20:16:35 -0400 (EDT) Subject: [Paleopsych] Gary North: The Significance of the Scopes Trial Message-ID: Gary North: The Significance of the Scopes Trial http://www.lewrockwell.com/north/north394.html 5.7.12
On July 10, 1925, the most culturally important trial in American history began: Tennessee vs. John Scopes. It was the first trial to be covered on the radio. Hundreds of reporters showed up in Dayton, Tennessee, from all over the world. The monkey trial became a media circus. The trial ended on July 24. William Jennings Bryan died in Dayton on July 26. With this, the American fundamentalist movement went into political hibernation for half a century, coming out of its sleep fifty-one years later in the Ford-Carter Presidential race. There is a great deal of confusion about the details of the trial, but not about its fundamental point: the legitimacy of teaching Darwinism in tax-funded schools, kindergarten through high school. On this point, all sides agree: the trial was a showdown between Darwinism and fundamentalism. What is not recognized is the far greater importance of the underlying agreement, an agreement that had steadily gained ground for half a century by 1925 and still prevails: the legitimacy of tax-supported education. What I write here is a summary of a lengthy, heavily footnoted chapter in my 1996 book, [9]Crossed Fingers: How the Liberals Captured the Presbyterian Church. That book is [10]on-line for free. So is the chapter: "[11]Darwinism, Democracy, and the Public Schools."
THE ORIGINS The origins of the trial are generally unknown. It was begun as a public relations stunt by a group of Dayton businessmen. They had heard of the challenge by the American Civil Liberties Union (ACLU) regarding a test case for the Tennessee law against teaching evolution in the public schools. They thought that if they could get someone in Dayton to confess to having taught evolution in the local high school, the town would get a lot of free publicity. We can hardly fault their assessment of the potential for free publicity - monetarily free, that is. Scopes agreed to be the official victim. The irony is this: he was not sure that he had actually taught from the sections of the biology textbook that taught Darwinism. Had he been put on the witness stand and asked by the defense if he had taught evolution, he would have had to say he did not recall. He was never put on the stand. Also forgotten is the content of the textbook in question. The Wikipedia encyclopedia entry has refreshed our memories. The textbook, like most evolution textbooks of the era, was committed to eugenics and a theory of racial superiority. The textbook declared: "Although anatomically there is a greater difference between the lowest type of monkey and the highest type of ape than there is between the highest type of ape and the lowest savage, yet there is an immense mental gap between monkey and man. At the present time there exist upon the earth five races or varieties of man, each very different from the others in instincts, social customs, and, to an extent, in structure. These are the Ethiopian or negro type, originating in Africa; the Malay or brown race, from the islands of the Pacific; the American Indian; the Mongolian or yellow race, including the natives of China, Japan and the Eskimos; and finally, the highest type of all, the Caucasians, represented by the civilized white inhabitants of Europe and America." (pp. 195-196). ". . . 
if such people were lower animals, we would probably kill them off to prevent them from spreading. Humanity will not allow this, but we do have the remedy of separating the sexes in asylums or other places and in various ways of preventing intermarriage and the possibilities of perpetuating such a low and degenerate race. Remedies of this sort have been tried successfully in Europe and are now meeting with success in this country." (pp. 263-265). This was the wisdom of high school biology textbooks, circa 1925. The ACLU came to its defense. This information had to be brought to the children of Tennessee, the ACLU decided. THE STRATEGY The city's merchants did very well from the influx of media people who could not resist seeing William Jennings Bryan take on Clarence Darrow. The ACLU's strategy was to lose the case, appeal it, get it confirmed at the appellate court level, and appeal it to the U.S. Supreme Court, which they believed would overturn it. And why not? This was the Court that, two years later, determined that the state of Virginia had the right to sterilize a mentally retarded woman, without her knowledge or consent that this was the operation being performed on her. While she had a daughter of normal intelligence, this had no bearing on the case in the joint opinion of eight of the nine members of the Court. In the words of Oliver Wendell Holmes, Jr., who wrote the Court's opinion: "[12]Three generations of imbeciles are enough." Bryan offered to pay Scopes' fine. Both sides wanted conviction. Darrow threw the case. He told the jury it had to convict, which it promptly did. The ACLU hit an iceberg. The Dayton decision was overturned by the appellate court on a legal technicality. The case could not reach the Supreme Court's docket. Sometimes judges are more clever than ACLU attorneys expect. 
THE REAL CAUSE OF THE TRIAL Beginning with the publication of his book, In His Image in 1921, Bryan began calling for state laws against the teaching of Darwinism in tax-funded schools. What is not widely understood was his motivation. It was ethical, not academic. Bryan understood what Darwin had written and what his cousin Francis Galton had written. Galton developed the "science" of eugenics. Darwin in [13]The Descent of Man (1871) referred to Galton's book favorably. Also, Bryan could read the full title of Darwin's original book: [14]On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. Bryan was a populist. He was a radical. In terms of his political opinions, he was the most radical major party candidate for President in American history, i.e., further out on the fringes of political opinion compared with the views of his rivals. Clarence Darrow had no advantage with respect to championing far-left political causes. Bryan had read what Darwin had written, and he was appalled. He recognized that a ruthless hostility to charity was the dark side of Darwinism. Had Darwin's theory been irrelevant, he said, it would have been harmless. Bryan wrote: "This hypothesis, however, does incalculable harm. It teaches that Christianity impairs the race physically. That was the first implication at which I revolted. It led me to review the doctrine and reject it entirely." In [15]Chapter 4, Bryan went on the attack. He cited the notorious passage in Darwin's Descent of Man (1871): With savages, the weak in body or mind are soon eliminated; and those that survive commonly exhibit a vigorous state of health. We civilized men, on the other hand, do our utmost to check the process of elimination; we build asylums for the imbecile, the maimed, and the sick; we institute poor-laws; and our medical men exert their utmost skill to save the life of every one to the last moment. 
There is reason to believe that vaccination has preserved thousands, who from a weak constitution would formerly have succumbed to small-pox. Thus the weak members of civilised societies propagate their kind. No one who has attended to the breeding of domestic animals will doubt that this must be highly injurious to the race of man." (Modern Library edition, p. 501) He could have continued to quote from the passage until the end of the paragraph: "It is surprising how soon a want of care, or care wrongly directed, leads to the degeneration of a domestic race; but excepting in the case of man himself, hardly any one is so ignorant as to allow his worst animals to breed" (p. 502). It is significant that Darwin at this point footnoted Galton's 1865 Macmillan's magazine article and his book, [16]Hereditary Genius. Beginning that year, Bryan began to campaign in favor of state laws against teaching evolution in tax-funded schools. He did not target universities. He knew better. That battle had been lost decades before. He targeted high schools. A dozen states had introduced such bills. Tennessee passed one. The Establishment recognized the threat. It saw that its monopoly over the curriculum of the public schools was its single most important political lever. So did Bryan. Bryan was targeting the brain of the Beast. He had to be stopped. Across America, newspapers and magazines of the intellectual classes began the attack. I survey this in my chapter, citing from them liberally - one of the few things liberal that I do. The invective was remarkable. They hated Bryan, and they hated his fundamentalist constituency even more. Yet the Democrats had nominated his brother for Vice President less than a year earlier. His brother had developed the first political mailing list in history, and the Democrats wanted access to it. 
Bryan wrote in a 1922 New York Times article (requested by the Times, so as to begin the attack in response): The Bible has in many places been excluded from the schools on the ground that religion should not be taught by those paid by public taxation. If this doctrine is sound, what right have the enemies of religion to teach irreligion in the public schools? If the Bible cannot be taught, why should Christian taxpayers permit the teaching of guesses that make the Bible a lie? This surely was a legitimate question, one which has yet to be answered in terms of a theory of strict academic neutrality. But Paxton Hibben, in his 1929 biography of Bryan (Introduction by Charles A. Beard), dismissed this argument as "a specious sort of logic. . . . [Tax-funded] schools, he reasoned, were the indirect creations of the mass of citizens. If this were true, those same citizens could control what was taught in them." If this were true: the subjunctive mood announced Hibben's rejection of Bryan's premise. Bryan had to be stopped. They stopped him. The most famous reporter at the trial was H. L. Mencken. That Mencken was drawn to Dayton like a moth to a flame is not surprising. He hated fundamentalism. He also loved a good show, which the trial proved to be. But there was something else. He was a dedicated follower of Nietzsche. In 1920, Mencken's translation of Nietzsche's 1895 book, [17]The Antichrist, was published. Bryan had specifically targeted Nietzsche in [18]In His Image. "Darwinism leads to a denial of God. Nietzsche carried Darwinism to its logical conclusion." Mencken was determined to get Bryan if he could. Two months before the trial, Mencken approached Darrow to suggest that Darrow take the case. In a [19]2004 article posted on the University of Missouri (Kansas City) website, Douglas Linder describes this little-known background. Mencken shaped, as well as reported, the Scopes trial.
On May 14, 1925, he met Darrow in Richmond, and - according to one trial historian - urged him to offer his services to the defense. Hours after discussing the case with Mencken, Darrow telegraphed Scopes's local attorney, John Randolph Neal, expressing his willingness to "help the defense of Professor Scopes in any way you may suggest or direct." After Darrow joined the defense team, Mencken continued to offer advice. He told defense lawyers, for example, "Nobody gives a damn about that yap schoolteacher" and urged them instead to "make a fool out of Bryan." THE STAKES Both sides accepted the legitimacy of the principle of tax-funded education. Both sides were determined to exercise power over the curriculum. But there was a fundamental difference in strategies. Bryan wanted a level playing field. The evolutionists wanted a monopoly. Bryan's defeat did not get the laws changed in the three states that had passed anti-evolution laws. It did get the issue sealed in a tomb for the rest of the country. The evolutionists made it clear during the war on Bryan that democracy did not involve the transfer of authority over public school curriculums to political representatives of the people. The New York Times (Feb. 2, 1922) ran an editorial that did not shy away from the implications for democracy posed by an anti-evolution bill before the Kentucky legislature. The Times repudiated democracy. It invoked the ever-popular flat-earth analogy. "Kentucky Rivals Illinois" began with an attack on someone in Illinois named Wilbur G. Voliva, who did believe in the flat earth. Next, it switched to Kentucky. "Stern reason totters on her seat when asked to realize that in this day and country people with powers to decide educational questions should hold and enunciate opinions such as these." To banish the teaching of evolution is the equivalent of banishing the teaching of the multiplication table. 
Three days later, the Times followed with another editorial, appropriately titled, "Democracy and Evolution." It began: "It has been recently argued by a distinguished educational authority that the successes of education in the United States are due, in part at least, 'to its being kept in close and constant touch with the people themselves.' What is happening in Kentucky does not give support to this view." The Progressives' rhetoric of democracy was nowhere to be found in the Times' articles on Bryan and creationism, for the editors suspected that Bryan had the votes. For the Progressives, democracy was a tool of social change, not an unbreakable principle of civil government; a slogan, not a moral imperative. Though often cloaked in religious terms, democracy was merely a means to an end. What was this end? Control over other people's money and, if possible, the minds of their children. In the Sunday supplement for February 5, John M. Clarke was given an opportunity to comment on the Kentucky case. He was the Director of the State Museum at Albany. His rhetoric returned to the important theme of the weakness of democracy in the face of ignorant voters. I cite the piece at length because readers are unlikely to have a copy of this article readily at hand, and when it comes to rhetoric, summaries rarely do justice to the power of words. It began: Our sovereign sister Kentucky, where fourteen and one half men in every hundred can neither read nor write, is talking about adding to the mirth of the nation in these all too joyless days by initiating legislation to put a end to that "old bad devil" evolution. Luther threw an ink bottle at one of his kind; the Kentucky legislators are making ready to throw a statute which will drive this serpent of the poisoned sting once and for all beyond the confines of the State, and not a school wherein this mischiefmaker is harbored shall have 1 cent of public moneys. The issue was democratic control over tax-funded education. 
Mr. Clarke was against any such notion. When the majority of the voters, of which fourteen and a half out of each hundred can neither read nor write, have settled this matter, if they are disposed to do the right thing they will not stop at evolution. There is a fiction going about through the schools that the earth is round and revolves around the sun, and if Frankfort [Kentucky] is to be and remain the palladium of reason and righteousness, this hideous heresay [heresy] must also be wiped out. Here it was again: the flat earth. It has been a favorite rhetorical device used against biblical creationists for a long time. The claim that pre-Columbus medieval scholars regarded the earth as flat, it turns out, is entirely mythical - a myth fostered in modern times. Jeffrey Burton Russell, the distinguished medieval historian, has disposed of this beloved myth. The story was first promoted by American novelist Washington Irving. The modernists who have invoked this myth have not done their homework. Because Bryan was a great believer in tax-funded education, he entered the fray as just one more politician trying to get his ideas fostered in the schools at the expense of other voters. He professed educational neutrality. His opponents professed science. He lost the case in the courtroom of public opinion. THE AFTERMATH Bryan won the case and lost the war. The international media buried him, as they had buried no other figure in his day. His death a few days later in Dayton sealed the burial. A year later, liberals captured both the Northern Presbyterian Church and the Northern Baptists. Bryan had been a leader in the Northern Presbyterian Church, running for moderator and barely losing in 1923. The tide turned in 1926. In the mainline denominations, the conservatives began to lose influence. In a famous 1960 article in Church History, "The American Religious Depression, 1925-1935," Robert Handy dated the beginning of the decline in church membership from the Scopes trial.
Handy taught at liberal Union Theological Seminary in New York City. In 1980, Joel Carpenter wrote a very different article in the same journal: "Fundamentalist Institutions and the Rise of Evangelical Protestantism." He pointed out that Handy had confined his study to the mainline denominations. In 1926, he said, an increase in membership and church growth began in the independent fundamentalist and charismatic churches. The fundamentalists began to withdraw from the mainline churches. What Handy saw as decline, Carpenter saw as growth. Both phenomena began in response to the Scopes trial. Fundamentalists began to withdraw from national politics and mainstream culture. The roaring twenties were not favorable times for fundamentalists. Their alliance with the Progressives began to break down. [20]This alliance had gotten the eighteenth amendment passed. By the time Prohibition was repealed in 1933, the fundamentalists had begun their Long March into the hinterlands. Only in the 1976 Presidential election did they begin to re-surface. In 1980, they came out in force for Reagan. Two events mark this transformation, neither of which receives any attention from historians: the "Washington for Jesus" rally in the spring of 1980 and the "National Affairs Briefing Conference" in Dallas in September. CONCLUSION The Scopes trial was a media circus. The play and movie that made it famous three decades later, Inherit the Wind, was an effective piece of propaganda. The website of the law school of the University of Missouri, Kansas City, offers [21]a good introduction to the story of this trial. But this version has a hard time competing with the textbook versions and the documentaries. The victors write the textbooks. These textbooks are not assigned in Bryan College, located in Dayton, Tennessee - or if they are, they are not believed. There is no Darrow College. Gary North [[22]send him mail] is the author of [23]Mises on Money. Visit [24]http://www.freebooks.com.
He is also the author of a free multi-volume series, [25]An Economic Commentary on the Bible. [26]Gary North Archives

References

9. http://www.amazon.com/exec/obidos/tg/detail/-/0930464745/lewrockwell/
10. http://www.freebooks.com/docs/243a_47e.htm
11. http://www.freebooks.com/docs/html/gncf/Chapter07.htm
12. http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=us&vol=274&invol=200
13. http://www.amazon.com/exec/obidos/ASIN/0140436316/lewrockwell/
14. http://www.amazon.com/exec/obidos/ASIN/0674637526/lewrockwell/
15. http://www.scopestrial.org/inhisimage.htm
16. http://www.amazon.com/exec/obidos/tg/detail/-/1417947705/lewrockwell/
17. http://www.amazon.com/exec/obidos/ASIN/1884365205/lewrockwell/
18. http://www.amazon.com/exec/obidos/tg/detail/-/1417912812/lewrockwell/
19. http://www.law.umkc.edu/faculty/projects/ftrials/scopes/menckenh.htm
20. http://www.mises.org/journals/jls/9_1/9_1_5.pdf
21. http://www.law.umkc.edu/faculty/projects/ftrials/scopes/scopes.htm
22. mailto:gary at kbot.com
23. http://www.lewrockwell.com/north/mom.html
24. http://www.freebooks.com/
25. http://www.demischools.org/economic-bible-commentary.html
26. http://www.lewrockwell.com/north/north-arch.html

From checker at panix.com Mon Jul 18 00:17:01 2005 From: checker at panix.com (Premise Checker) Date: Sun, 17 Jul 2005 20:17:01 -0400 (EDT) Subject: [Paleopsych] Meme 044: The Collapse of the SSSM (Standard Social Science Model) Message-ID:

Meme 044: The Collapse of the SSSM (Standard Social Science Model) sent 5.7.17

Communism collapsed mighty rapidly, from the victory of Solidarity in Poland in 1989 June to the opening of the Berlin Wall in November that year, just five months later. (The USSR itself lingered on until 1991 December 8.) One unremarked miracle is that the well-nigh irrepressible urge of politicians to make hot air was suppressed: I remember no speeches from Communist politicians about how beneficial communism is, how moral it is, how it should be given another try.
We are witnessing an equally sudden collapse of the Standard Social Science Model, the idea that human social organization has little or nothing to do with human biology. The SSSM holds that the human brain is a domain-general learning mechanism (also called a blank slate, or tabula rasa after John Locke), rather than a network of domain-specific mechanisms that guide learning in ways that were practical in our evolutionary past. If the SSSM is true, then people can learn almost anything and culture can establish almost any rules that social scientists want. (The power implications should be obvious.) What kept communism going was that it suppressed its critics, though the time of fervent belief had long since passed. By the fall of the Berlin Wall there were more truly-believing communists in West German universities than in the whole of East Germany. Criticism of the SSSM had been building ever since the work of Nikolaas Tinbergen and Konrad Lorenz in the comparative study of animal behavior ("ethology") in the 1950s suggested application to human societies, leading to E.O. Wilson's _Sociobiology_ in 1975. Still, in most university departments in sociology and the humanities (not so much in psychology), the SSSM reigns by severely sanctioning those who inject biology into their studies, on the grounds of racism or reductionism. What is happening now is the rapid fall of suppression (though not the hot air). The last five months have witnessed publication in major journals on the heritability of political attitudes, the superior intelligence of Ashkenazi Jews, the different cognitive abilities among races generally (until very recently a firm taboo), and the rehabilitation of individual responsibility (striking, since the SSSM ignores free will along with biology). These articles would not have been published in major journals just a few years ago. The Berlin Wall in academia having fallen, there is a flood of them.
But the best evidence, perhaps, is that the article I recently sent, Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to "Prejudice," by Cottrell, Catherine A., and Neuberg, Steven L., _Journal of Personality and Social Psychology_, 2005 May, does indeed invoke sociobiology in explaining the wide varieties of prejudice as evolutionary adaptations, as opposed to purely cultural ones. The amazing thing is that the paper is really a Big Mac journal article: "Virtually all research articles have a predictable format--review of the literature, hypotheses, methodology, results, tables, interpretation, conclusion, footnotes and references. [He forgot to add the plea for more funding.] Reading the typical American research article offers the same kinds of gratification as eating a Big Mac for lunch. The sociologist knows exactly what to expect and where each component of the article will be found, just as the customer knows that the Big Mac will include a bun, burger, pickle, relish, and "special sauce," as well as where each element is to be found if one cared to deconstruct the burger. There is great satisfaction in knowing precisely what can be expected in one's lunch and in what one reads before, after, or even with that lunch. Since they are both highly rationalized, a Big Mac and the typical research article in an American journal go well together at lunchtime. (In contrast, it would be ludicrous to try to read the latest, non-rationalized books of Pierre Bourdieu or Jurgen Habermas over such a lunch.) It is nice to know that there will be no big surprises--Big Macs and research articles almost always deliver precisely what is expected, no less, but also--and most tellingly and damningly--no more" (George Ritzer, _The McDonaldization Thesis_ (London: Sage Publications, 1998), p. 40).
Having read the article--well, not thoroughly: only the author and the peer reviewers give these articles a thorough reading--I can report that the whole backdrop of the actual experiment with college students could just as easily have been couched in SSSM or even Freudian terms as in sociobiological ones. The experiment itself only showed that there are different kinds of prejudice, depending on the kinds of threats the object of prejudice is seen to pose. It seems quite surprising that this variability had not been studied before, but it's easy to get an article placed if you conduct a slightly different experiment and thoroughly follow the Big Mac article procedures. What's remarkable, as far as the collapse of the forces that suppressed critics of the SSSM goes, is that authors can now get away with attacking the SSSM and substituting the rhetoric of evolutionary psychology. I use the term rhetoric, even though I buy into e.p. explanations generally, fully aware of their limitations. Being better than the SSSM does not mean perfection. I also realize that the social layer is to some extent (in some cases, a great extent) independent of biology, at least in the short term. There is no reduction of Die große Fuge to Beethoven's synapses. Biology constrains what can happen and how fast society can change. I won't say much about the political fallout, since political lag is the severest form of culture lag.
We're seeing the rise of Big God--here we really need sociological explanations, for biological explanations speak of a "god gene" (no, the biological explanations are not *that* reductive) but not of *changes* in the political landscape over a few decades--and may be seeing the rise of Big White, the former due to the failure of secularists to come up with an adequately strict morality suitable to carry individuals through periods of rapid change, the latter in reaction to Affirmative Action programs and to immigration, lately of Muslims with their competing Big God issues. I'm going to concentrate on the sudden collapse of the SSSM and send out articles relevant to this historic event. This will come at the expense of most of what I've been sending out. (I badly need to catch up with my own forwardings and have 300 e-mails backed up. Let me know if there is something you very specifically want a response to. You often just send casual remarks. I do not intend to ignore my critics; in fact, I thrive on them. So please repeat what you most want me to respond to. Otherwise, I'll assume you have figured out your own answers!) I'll drop most of the short-term political stuff and the merely routine complaining. I'll drop most of the stuff on technological advances. (Send me an e-mail for my own best sources.) I will continue to watch for thoughtful articles on long-term social trends, such as the Americanization of the world and the resistance to same. Below are some paragraphs I've grabbed on the SSSM.
Harold Fromm: The New Darwinism in the Humanities http://www.hudsonreview.com/frommSpSu03.html

The orthodoxy that triggers revolt for Cosmides and Tooby can be represented by a remark by Emile Durkheim from 1895, a sentiment whose influence shaped the social sciences for almost a century: Collective representations, emotions, and tendencies are caused not by certain states of the consciousness of individuals but by the conditions in which the social group, in its totality, is placed. Such actions can, of course, materialize only if the individual natures are not resistant to them; but these individual natures are merely the indeterminate material that the social factor molds and transforms. [Emphasis added by Cosmides and Tooby.] From this are generated the two most powerful themes of The Adapted Mind: the Standard Social Science Model, or SSSM, and the blank slate: The Standard Social Science Model requires an impossible psychology. Results out of cognitive psychology, evolutionary biology, artificial intelligence, developmental psychology, linguistics, and philosophy converge on the same conclusion: A psychological architecture that consisted of nothing but equipotential, general-purpose, content-independent, or content-free mechanisms could not successfully perform the tasks the human mind is known to perform or solve the adaptive problems humans evolved to solve -- from seeing, to learning a language, to recognizing an emotional expression, to selecting a mate, to the many disparate activities aggregated under the term "learning culture." . . . Although most psychologists were faintly aware that hominids lived for millions of years as hunter-gatherers or foragers, they did not realize that this had theoretical implications for their work. More to the point, however, the logic of the Standard Social Science Model informed them that humans were more or less blank slates for which no task was more natural than any other.
The appeal of the SSSM is that it provides a rationale for social engineering and political correctness, for promulgating such egalitarian absurdities as the doctrine that there are no substantive psychological differences between the sexes, a doctrine that has finally run its course. Or as Cosmides and Tooby put it, "A program of social melioration carried out in ignorance of human complex design is something like letting a blindfolded individual loose in an operating room with a scalpel -- there is likely to be more blood than healing." Rhetorically asking how it is possible for pre-linguistic children to deduce the meanings of the words they hear when they are in the process of learning their local language for the first time, they reply that infants' powers of interpretation must be supplied by the human universal metaculture the infant or child shares with adults by virtue of their common humanity, in other words, their evolved nature.

-------------------

The description of evolutionary psychology from the latest edition of Microsoft's Encarta Encyclopaedia:

INTRODUCTION

Evolutionary Psychology, the notion that the human mind is the product of evolution and has therefore developed innate psychological mechanisms that are typical of the human species. This relatively new field of study stands in marked contrast to the standard social science model (SSSM), which has tended to portray the human mind as a general-purpose computer to be programmed by random, culture-specific determinants (the "blank slate" thesis). This new branch of psychology grew out of developments in the late 20th century in a number of quite disparate disciplines including evolutionary biology, palaeoanthropology, and cognitive psychology. Evolutionary psychologists have used findings from each of these fields of research to argue that a universal human nature lies just below the surface of cultural variability.
While the SSSM emphasizes the flexibility of human learning, social environment, and random cultural processes, evolutionary psychologists believe that such flexibility consists of a number of tendencies to learn particular skills and to do so at various, specific ages. Evolutionary psychologists do not dispute the importance of learning, but attempt to explain the process in terms of innate mechanisms. Likewise, evolutionary psychology stresses the importance of culture, but, rather than defining culture as a random force, it sees it as the way in which humans are aided in acquiring skills that potentially enhance fitness and, therefore, the ability to survive longer than others and produce fit offspring.

---------------------

From: Paul Voestermans To: Date: Apr 13 2000 - 5:16am

Cultural psychology meets evolutionary psychology. Toward a new role of biology in the study of culture and experience

Paul Voestermans NCPG, department of cultural psychology, University of Nijmegen, the Netherlands. P.O.Box 9104 6500 HE Nijmegen The Netherlands voestermans at psych.kun.nl Cor Baerveldt NCPG, department of cultural psychology, University of Nijmegen, the Netherlands. P.O.Box 9104 6500 HE Nijmegen The Netherlands baerveldt at psych.kun.nl Paul Voestermans & Cor Baerveldt NCPG, department of cultural psychology, University of Nijmegen, the Netherlands

The basics of evolutionary psychology

Evolutionary psychologists are much concerned with the behavior-generating principles in the brain, which come into existence under the pressure of adaptive problems in the environment. They want to get rid of the Standard Social Science Model (SSSM) of the social and behavioral sciences, as defended, for example, by Geertz (1973) and Montagu (1964). That model is a relic of ideas entertained long before Darwin. At that time the human mind was not considered to be part of nature.
It was a pre-given device (of divine origin) which stood open to the outside world and took its content from the social world. The evolutionary psychologists are convinced that this assumption is still a vital part of the SSSM. Central to this model is the idea that the mind operates on the basis of the free social construction of its content. This idea also underlies a few incorrect presuppositions with respect to culture, or so the evolutionary psychologists argue. The most important one is that culture is somehow transmitted to a brain that functions as a "general purpose machine". To this machine belong the abilities to learn and to imitate others. General intelligence and rationality belong to it as well. The idea is that these functions are all free of content. Let us quote what the evolutionary psychologists have to say on this score: "all of the specific content of the human mind originally derives from the outside - from the environment and the social world - and the evolved architecture of the mind consists solely or predominantly of a small number of general purpose mechanisms that are content-independent, and which sail under names such as "learning", "induction", "intelligence", "imitation", "rationality", "the capacity for culture" or simply "culture"" (Cosmides & Tooby, Internet Primer, 1997, p. 3). What is the alternative proposed by the evolutionary psychologists? In the course of evolution a few regulative, functionally specialized circuits in the brain have evolved. They are designed for the execution of behaviors which are functionally organized around adaptive problems our stone-age forebears encountered. There is some convergence on the part of neuroscientists, evolutionary biologists, and cognitive psychologists on the issue of how the brain as a physical system processes information in order to generate certain behaviors.
This convergence aims at an understanding, in terms of "computations" and information processing, of a variety of behaviors ranging from perception and cognitive functioning (Cosmides & Tooby, 1994) to sex and mating behavior (Symons, 1979) and a variety of social psychological phenomena (Simpson & Kenrick, 1997). Those who adopt the SSSM have assumed too readily that "all significant aspects of adult mental organization are supplied culturally". Linking the production of culture solely to "general purpose learning mechanisms or content-independent cognitive processes" denies the relationship between biology and psychology and suggests too strongly that human beings are instinctually "underprepared". Learning becomes too much of a "window through which the culturally manufactured pre-existing complex organization outside of the individual manages to climb inside the individual" (Tooby and Cosmides, 1992, p. 30). Content-specific brain mechanisms are neglected.

------------------

http://www.ling.upenn.edu/courses/hum100/intro.html

The standard social science model: "A set of assumptions and inferences about humans, their minds, and their collective interaction ... that has provided the conceptual foundations of the social sciences for nearly a century..." In fact, these assumptions and inferences have been largely shared with the humanities and the arts, at least in the academic world. ...one would be strangely mistaken about our thought if ... he drew the conclusion that sociology, according to us, must, or even can, make an abstraction of man and his faculties. It is clear. . . that the general characteristics of human nature participate in the work of elaboration from which social life results. But they are not the cause of it, nor do they give it its special form; they only make it possible.
Collective representations, emotions, and tendencies are caused not by certain states of the consciousness of individuals but by the conditions in which the social group, in its totality, is placed. Such actions can, of course, materialize only if the individual natures are not resistant to them; but these individual natures are merely the indeterminate material that the social factor molds and transforms. Their contribution consists exclusively in very general attitudes, in vague and consequently plastic predispositions which, by themselves, if other agents did not intervene, could not take on the definite and complex forms which characterize social phenomena. (Emile Durkheim, 1895).

Sketch of the arguments behind the SSSM, taken from Cosmides and Tooby, "The Psychological Foundations of Culture," in Barkow, Cosmides and Tooby, The Adapted Mind (1992): Rapid historical change and spontaneous "cross-fostering experiments" dispose of the racist notion that intergroup behavioral differences are genetic. Infants everywhere have the same developmental potential. Although infants are everywhere the same, adults everywhere differ profoundly in their behavioral and mental organization. Therefore, "human nature" (the evolved structure of the human mind) cannot be the cause of the mental organization of adult humans, their social systems, their culture, etc. Complexly organized adult behaviors are absent from infants. Whatever "innate" equipment infants are born with must therefore be viewed as highly rudimentary -- an unorganized set of crude urges or drives, along with a general ability to learn. Infants must acquire adult mental organization from some external source in the course of development. The external source is obvious: this organization is manifestly present in the behavior and the public representations of other members of the local group. "Cultural phenomena are in no respect hereditary but are characteristically and without exception acquired."
"Undirected by culture patterns -- organized systems of significant symbols -- man's behavior would be virtually ungovernable, a mere chaos of pointless acts and exploding emotions, his experience virtually shapeless" (Geertz 1973). This establishes that the social world is the cause of the mental organization of adults. The cultural and social elements that mold the individual precede the individual and are external to the individual. The mind did not create them; they created the mind. They are "given, and the individual finds them already current in the community when he is born." (Geertz 1973). The causal flow is overwhelmingly or entirely in one direction: the individual is the acted upon and the sociocultural world is the actor. Therefore, what complexly organizes and richly shapes the substance of human life -- what is interesting and distinctive and worthy of study -- is the variable pool of stuff that is referred to as "culture". But what creates culture? Culture is not created by the biological properties of individual humans -- human nature. Rather, culture is created by some set of emergent processes whose determinants are realized at the group level. The sociocultural level is a distinct, autonomous and self-caused realm. "Culture is a thing sui generis which can be explained only in terms of itself... Omnis cultura ex cultura." (Lowie 1917). Alfred Kroeber: "The only antecedents of historical phenomena are historical phenomena." Emile Durkheim: "The determining cause of a social fact should be sought among the social facts preceding it and not among the states of individual consciousness." Geertz: "Our ideas, our values, our acts, even our emotions, are, like our nervous system itself, cultural products -- products manufactured, indeed, out of tendencies, capacities, and dispositions with which we were born, but manufactured nonetheless." (1973).
Therefore, the SSSM denies that "human nature" -- the evolved architecture of the human mind -- can play any notable role as a generator of significant organization in human life... In so doing, it removes from the concept of human nature all substantive content, and relegates the architecture of the human mind to the narrowly delimited role of embodying "the capacity for culture." From this perspective, the choice of "human nature" as the inaugural theme for the Humanities Forum is surprising and even shocking.

--------------------

What is Experimental Economics? by Vernon Smith http://www.ices-gmu.org/article.php?id=368&print=1

The first concept of a rational order derives from today's standard social-economic science model (SSSM) going back to the seventeenth century. The SSSM is an example of what Hayek has called constructivist rationalism, which, in its modern forms and power, stems from Descartes, who believed and argued that all worthwhile social institutions were and should be created by conscious deductive processes of human reason. Truth is derived and derivable from premises that are obvious and unassailable. Thus, in positive economics it has been argued influentially that you judge the validity of a model by its predictions, not by its assumptions--a methodology that provides limited guidance in experimental studies where one can control the economic environment and institutional rules. In economics the SSSM leads to rational predictive models of decision that motivate research hypotheses that experimentalists have been testing in the laboratory since the mid twentieth century. The test results are decidedly mixed, and this has motivated constructivist extensions of game theory, most notably based on other-regarding, in addition to own-regarding, preferences, and on 'learning'--the idea that the predictions of the SSSM might be approached over time by trial-and-error adaptation processes.
For tractability, Cartesian rationalism provisionally requires agents to possess complete information - far more than could ever be given to one mind. In economics the resulting analytical exercises, while yielding insightful theorems, are designed to aid and sharpen thinking - an if-then parable. Yet these exercises may not approximate the level of ignorance that has conditioned institutions, as abstract rules independent of particular parameterizations that have survived as part of the world of experience. The temptation, of course, is to ignore this reality, because it is poorly understood, and to proceed in the implicit belief that our parables capture what is most essential in understanding what we observe.

----------------

The biology of culture 23 November 2000, 808 words Kevin Baldeosingh http://www.caribscape.com/baldeosingh/social/sober/2000/culture2.html

One of the more depressing traits of human beings is their readiness to accept obvious falsehoods and, concomitantly, an almost equal readiness to reject provable truths. Evolutionary theory is an example of the latter; an example of the former is the widely-held belief that culture determines human nature. In his book Human Universals, the anthropologist Donald Brown writes, "...the proposition that nature and culture are two distinct phenomenal realms assumes [that] a given trait, behaviour or institution is either cultural or it is natural, there is nothing in-between. In any form, this proposition ignores the obvious truth that, whatever the analytical validity of distinguishing nature from culture, the latter must come from the former. Folk beliefs notwithstanding, there is no alternative to this materialist tenet." Most people's belief systems are adopted on the basis of heritability - i.e. whatever their parents and/or peers happen to believe. So what the average person believes is determined, not by logic or proof, but by convenience and comfort.
Evolution explains why this is so; culture does not. That is, we have evolved to believe whatever "truth" best ensures our genetic survival, not whatever is really true, since the real truth may sometimes not serve our interests. (Indeed, a good test of how objective you truly are is to consider whether you have any deep beliefs which you would prefer not to hold.) This belief in the power of culture to shape human nature is particularly pernicious because it permeates all strata of society, from the academic to the political to the folk. Ordinary people believe that anyone who has a different skin colour, language, religion, even eating habits, is not only different, but fundamentally different from them. Politicians believe that people can be persuaded to believe anything, if told it often enough and loud enough. The academic viewpoint, which has been dubbed the Standard Social Science Model (or SSSM), was articulated by American anthropologist Margaret Mead in 1935 in this way: "...human nature is almost unbelievably malleable, responding accurately and contrastingly to contrasting cultural conditions...The members of either or both sexes may...be educated to approximate [any temperament]." Mead's conclusions were based on her studies of the Samoan islanders, whose nonchalant sexual habits, she claimed, made them satisfied and their society crime-free; and on the Tshambuli, who had reversed sex roles, with the men wearing make-up and curls and, she said, having gentle natures as a result. Fact is, later studies showed that the Samoans had sexual jealousy and rape like any other society, and the Tshambuli men were wife-beaters and treated homicide as a milestone in a young man's life which entitled him to wear the face-paint Mead thought was so feminine. The ethnographic evidence overwhelmingly shows that a universal human nature does exist. 
Brown lists nine pages of traits common to all known societies, such as prestige, gossip, humorous insults, rhetoric, terms distinguishing male and female, sexual regulations, kinship terms, property, rules proscribing violence and rape, and many more. This is hardly surprising. Human beings all have the same genomes. Genes build bodies and bodies build brains and brains build minds. Ergo, human beings are basically the same in the Amazon rainforests and the metropolitan cities. Differences in behaviour, beliefs and habits - i.e. culture - are mainly the result of ecology, geography and technology. Yet, although both the logic and the empirical evidence show that human nature is not as malleable as the SSSM insists, academics and other intellectuals still write as though it is. There are several reasons for this obduracy, not least of which is social scientists promoting their own professional agenda. The SSSM is the secular catechism of educated people, who are too intellectually advanced not to see the obvious contradictions of religion, but who are not emotionally advanced enough to reject the religious mindset. Culture assumes the aspect of a God, to whom one can pray in order to bring about fundamental changes in the reality of the world. Concomitantly, the attraction of the SSSM also lies in its implication that one can wield power over others since, if you can control their culture, you can control their minds. This idea is what lies behind calls to legislate local music quotas, as well as the advertising blitz for the 2000 general election. Cultural determinism can only work if the human mind is a tabula rasa or, as some commentators believe, if the average person is at the mental and emotional level of a two-year-old. But, even if the latter were true, it would provide no comfort to those who think people's attitudes can be shaped by cultural fascism: most children learn to say "No" before "Yes". 
That little fact alone helps prevent me from despairing over the future of our species.

[I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.]

From anonymous_animus at yahoo.com Mon Jul 18 20:10:50 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Mon, 18 Jul 2005 13:10:50 -0700 (PDT) Subject: [Paleopsych] threats in groups In-Reply-To: <200507171800.j6HI0cR08721@tick.javien.com> Message-ID: <20050718201051.15853.qmail@web30806.mail.mud.yahoo.com>

>>What events signal to individuals that the functioning of their group may be compromised? Because groups enhance individual success by providing members with valuable resources, members should be attuned to potential threats to group-level resources such as territory, physical security, property, economic standing, and the like.<<

--Which brings up the issue of what happens when group members who are threatening to the long-term health of the group are rewarded for a single contribution, as in an abusive male rewarded for being able to punish an enemy, or a member of a corporation who brings in a lot of money while undermining the integrity of the whole. Or a group that colludes with each other, forgives each other's "sins" in order to stay powerful as a group, to the detriment of the tribe. Only when the tribe can meet the needs that corrupt members meet in order to avoid punishment can dysfunctional groups be dissolved. Otherwise you have people who are very dysfunctional carrying the flag/bible, posturing as defenders of the tribe to avoid being seen as a problem themselves. michael

From checker at panix.com Tue Jul 19 01:17:14 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:17:14 -0400 (EDT) Subject: [Paleopsych] BBC: Relationships: Couples: Is it over? Message-ID:

Relationships: Couples: Is it over? http://www.bbc.co.uk/cgi-bin/education/betsie/parser.pl

Falling out of love can sometimes be just as easy as falling in love. Working out whether it's just a phase or if your relationship has reached the end of the line is one of life's hardest decisions. Relationship psychotherapist [1]Paula Hall asks the difficult questions.

The pros and cons

When people try to decide if their relationship's over, they often find themselves weighing up the pros and cons. On the pros side they put all their partner's positive character traits, the happy memories and the advantages of being together. On the cons they list all the things they don't like about their partner, the painful memories and the reasons why living together sometimes feels impossible. The problem with this system is that they're never measuring like for like. For example, when listing personal qualities, how many negatives would it take to counteract being an excellent mother? And how many happy memories does it take to outweigh an affair? Unfortunately, there's no formula and no conclusive tests when it comes to deciding whether your relationship's over. All you can do is ask yourself some difficult, soul-searching questions and see what the answers bring.

Is love enough?

Love means different things to different people and at different stages of their lives, so can it be relied on in the decision-making process? For example, one woman may spend years in an abusive relationship, saying "I love him," while another will walk away from a seemingly idyllic marriage because she's no longer "in love". Love can sometimes blind us to the reality of what we really have.
And although it's difficult, we can choose to love someone and we can choose to stop loving them. As well as being a feeling, love is something we do. Do you like your partner? Before you can love someone, you have to like them. If you enjoy being with your partner, agree with how they think and behave, and share the same dreams in life, you're doing well. If your partner is also someone whom you respect, trust and feel affection for, you have all the basics for love to grow. Can you communicate? All relationships hit problems at one time or another; the key to overcoming them is communication. Within your relationship, there needs to be a genuine capacity for sharing and expressing your thoughts and feelings in a way that feels OK for you both. There also need to be ways to resolve conflict and for you both to address any unmet needs. Is change possible? If there's a particular issue that makes you want to leave, you first need to consider whether it's possible to make changes to resolve the problem. Is the problem something you can let go, or is it fundamental to your happiness? If it's the former, you have to ask yourself if you can change; if it's the latter, can your partner do the changing? If your partner doesn't agree that there's a problem, they won't change. If they do agree and are willing to change, you have to decide whether you believe they have the capacity to change. Is it too late? There's no doubt that some situations do get better with time. Even the most painful betrayals can become less significant if there's an ability to forgive and move on. But if either you or your partner has been hanging on to a grudge for years and there's no indication that the pain has eased at all, you may decide it's too late for a resolution. Another indication that it may be too late to save the relationship is if one of you has already started to develop a life that excludes the other. 
This might include a change in career or lifestyle, or starting another relationship that you don't want to end. If this is the case, then even though you haven't made a verbal decision to end the relationship, it may be that emotionally you've already left. Further help Deciding to end a relationship is extremely difficult and not a decision to be taken quickly or lightly. Many people find that talking through their thoughts and feelings with a counsellor can help. To find out more, see Do you need counselling? Recommended reading Too Good to Leave, Too Bad to Stay: A Step-by-Step Guide to Resolving Your Relationship by Mira Kirshenbaum (Michael Joseph) From checker at panix.com Tue Jul 19 01:17:21 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:17:21 -0400 (EDT) Subject: [Paleopsych] BBC: Time to switch off and slow down Message-ID: Time to switch off and slow down http://news.bbc.co.uk/go/pr/fr/-/2/hi/technology/4682123.stm Published: 2005/07/14 10:44:27 GMT By Kevin Anderson BBC News website At a hi-tech conference bristling with bloggers constantly checking messages on Blackberries, smartphones, laptops and handheld computers, it is odd to hear a speaker suggest an e-mail free day. But journalist Carl Honoré told attendees of the TED conference in Oxford they should unplug and slow down in a world that was stuck in fast-forward. And for a wired world accustomed to having nearly unlimited information and the boundless choices of online shopping, it seems almost heretical to suggest that the infinite possibilities of the modern world leave us less satisfied instead of more. But author Barry Schwartz told the conference that it was better when we had only a few choices of salad dressing instead of the 175 at his local supermarket. These were just some of the suggestions to the audience at TED in their search for the good life. 
TED (Technology, Entertainment and Design) brings together experts in design, technology, and entertainment to share their ideas about our futures. 'Roadrunner culture' We live in a world where instant gratification is not fast enough, in a world of not only speed dating, but even of speed yoga, said Mr Honoré. The author of In Praise of Slowness decided to decelerate after he found himself speed reading bedtime stories to his son. He even found himself excited when he read in the newspaper a story about one-minute bedtime stories. But he caught himself: "Has it really come to this that I'm ready to fob off my son with a sound bite at the end of the day?" People point to urbanisation, consumerism and globalisation as the cause of this "roadrunner culture", he said, but it is more fundamental. "In our society, time is a scarce resource," he said. "We turn everything into a race with the finish line but we never reach that finish line." But around the world, there is a backlash against this culture, such as the slow food and slow city movement in Italy. Across the world, people are slowing down, and they are finding that they "eat better, make love better, exercise better, work better". And Mr Honoré told a crowd flush with technology that they needed to rediscover the off button. Technology was supposed to make us more efficient, he explained. But our lives are often so driven by interruptions that a recent report on "info-mania" found that the flood of e-mails was such a distraction that it cut workers' IQ by 10 points. One department at software firm Veritas has declared Friday e-mail free, and it found that the day has become its most productive. More choice is less satisfying Continuing the theme that less is more, author and scholar Barry Schwartz challenged the orthodoxy that to maximise freedom and welfare we should maximise choice. 
It is such a deeply embedded assumption that no one questions it, said Mr Schwartz, who explored the idea in his book, The Paradox of Choice. He pointed to his local supermarket where he has a choice of 175 salad dressings, 40 toothpastes, 75 iced teas, 230 soups and 285 varieties of cookies. Choice is good, he said, but in modern, affluent societies most people are confronted with a bewildering array of choices that leads to paralysis. He said that his students sometimes become stuck in low-wage jobs because they fear making the wrong choice of career. Some professors at liberal arts colleges now joke that they "take students who would have been stuck working at McDonalds and make them people who are stuck working at Starbucks". With so many options confronting us about almost every decision, there is a greater chance that we will regret the decision we do make. The myriad choices raise our expectations and create the anticipation of perfection. Regret after making the wrong decision or what is perceived as the wrong decision leads to self-blame, depression and, in extreme cases, suicide, he said. We are bad at realising the downside of choice. "Some choice is better than none, but more choices don't make things better," he argued. From checker at panix.com Tue Jul 19 01:17:28 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:17:28 -0400 (EDT) Subject: [Paleopsych] ST News: The biological path to freedom Message-ID: The biological path to freedom http://www.stnews.org/articles.php?article_id=646&category=books [Anees is one of the few Muslim transhumanists] Ronald Bailey's Liberation Biology shows the biological path to freedom, says Munawar A. Anees. By Munawar A. Anees (July 15, 2005) Why is biotechnology in the eye of the public storm? One reason lies in its very nature: it empowers us to tinker with the building blocks of life. Its potentially irreversible effect on the human genome makes us pause to reflect. 
In Liberation Biology, Ronald Bailey, a science writer for Reason magazine, questions assumptions and conclusions made by biotechnology detractors, or bioconservatives. He argues that bioconservative criticisms of the biotech revolution are more political gimmick than scholarly endeavor. Conservative biotech Bailey addresses major issues in biotechnology: longevity, disease control, stem cell research, cloning, designer babies, agricultural biotechnology and mind improvement drugs. Bailey openly dismisses bioconservative arguments, saying their "fears are vastly exaggerated; their ethical objections to biotechnological progress are largely misconceived; and the biotech revolution rather than diminishing human dignity and liberty will instead enhance and enlarge them." He takes leading bioconservatives to task, writing that the "future toward which the biotech revolution is taking humanity is in fact almost the exact opposite of the Brave New World." Bailey's book covers vast factual ground, with pointed criticism directed at Leon Kass, chairman of the President's Council on Bioethics; Boston bioethicist George Annas, whose idea of "species consciousness" is meant to be a bulwark against Huxley's Brave New World; and Francis Fukuyama, professor at Johns Hopkins University, who fears that modifications to human nature by biotechnology will usher us into the posthuman or transhuman phase of our existence. Countering Fukuyama's somber statement that life extension could exert a negative influence upon society, Bailey argues, after Thomas Hobbes' Leviathan, that society exists for the sake of the individual and not vice versa. Religion and biotherapy Bailey shows that prominent Jewish and Christian theologians are in favor of age-retardation technology. After all, Methuselah, one of the patriarchs in Genesis, lived 969 years, and in Judaic teachings one of the metaphors for God is life. 
Thus, writes Bailey, "the highest expression of human dignity and human nature is to try to overcome the limitations imposed on us by our genes, our evolution, and our environment." The teachings of Islam also support age-retardation technology and similar interventions to improve quality of life. The Quran says, "And He has made subservient to you, [as a gift] from Himself, all that is in the heavens and on earth: in this, behold, there are messages indeed for people who think." Now, however, the use of embryonic stem cells is clouding the ethical picture. Bioconservatives consider human embryonic stem cell research immoral. Many Roman Catholics and conservative Protestants hold the view that personhood is endowed at the moment of conception, so a fertilized ovum has inviolable rights. Other religious traditions offer different treatments of the moral status of embryos. In the Talmud, during the first 40 days of gestation the embryo is simply thought of as water. Similarly, the general Islamic consensus on personhood is that it occurs after the completion of three phases of 40 days each. In other words, the end of the first four months in pregnancy marks the beginning of personhood. The moral positions held by the three monotheistic traditions compel us to a solemn discourse on pivotal questions such as the beginning of life, acquisition of personhood and the mystery of self-consciousness. Bailey argues that "since we do define 'person' as the sort of entities that do have brains capable of sustaining a mind, embryos clearly don't qualify." The specter of human cloning Recalling that many of the same people who oppose therapeutic cloning also opposed in vitro fertilization, Bailey concludes that the opposition to stem cell research can be reduced to fear of the unknown. This fear seems to have been exaggerated by sensational media reports on the clandestine cloning of human embryos, such as the alleged cloning carried out by the Raelians. 
Because of the excitement that accompanies any manipulation of human materials, these hoaxes tend to damage genuine scientific creativity. These deceptive reports have also kept people in the dark about the actual nature of cloning. For instance, there is a widespread misconception that cloning is a means to revive the dead. Bailey predicts that people will slowly accept cloning as a means to overcome infertility and not as a substitute for traditional human reproduction. Even a cursory look at the evolution of the technique over the last five decades should convince us that much of the criticism against biotech fails in the face of the contingency of technology. Some are afraid that biotech is assuming the role of a moral arbiter and should be shelved. But a faith in creative potential and freedom to actualize it would turn biotech into a truly revolutionary instrument of human evolution. Just like the transition from Gutenberg to Web publishing, biotech is causing a fundamental shift in human cognition. Biology and freedom Liberation Biology is a highly invigorating work that succeeds in restoring faith in biotechnology. As a talented science writer, Bailey offers a publication awash with references to dependable scientific research but still accessible to lay readers. Liberation Biology is rightfully about the biological path to freedom. It gives a resounding rebuttal to biological determinism by arguing a case for biology as a technique and not tyranny. Bailey's is a daring work that inspires readers to take a critical look at our religious and cultural beliefs while they undergo inevitable transformation as biological beings. Munawar A. Anees holds a Ph.D. in biology from Indiana University. He is an advisory editor of the Journal Of Islamic Science And Islamic Studies and author of Islam and Biological Futures: Ethics, Gender, and Technology, a seminal work in Muslim bioethics. 
Anees is an elected member of the Royal Academy for Islamic Civilization Research, Jordan, and a founding member of the International Society for Science and Religion, Cambridge, England. In February 2002, he was nominated for the Nobel Peace Prize. From checker at panix.com Tue Jul 19 01:17:34 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:17:34 -0400 (EDT) Subject: [Paleopsych] Science: Brain Under Surveillance: The Microglia Patrol Message-ID: Brain Under Surveillance: The Microglia Patrol http://www.sciencemag.org/cgi/content/full/309/5733/392 Science, Vol 309, Issue 5733, 392-393 , 15 July 2005 [DOI: 10.1126/science.1114852] Luc Fetler and Sebastian Amigorena* Biophysicists and biologists have long worked together to develop tools to analyze the limits of the living world--molecules and cells at one end and whole organisms at the other. In contrast, important intermediate levels of organization, namely the organs and tissues, had received little attention until very recently. For instance, we understand very little about what controls the density, shape, and size of organs, or how cells direct their movements or communicate with each other within tissues. This has been due in part to technical limitations. The environment that cells encounter within tissues can include multiple three-dimensional chemotactic gradients and numerous physical constraints imposed by interactions with the extracellular matrix and with other cells. Such a complex milieu is virtually impossible to reconstitute in vitro. The advent of two-photon microscopy (1) and its use on tissues of living animals is rapidly advancing our understanding of cell behavior and fate within tissues (2, 3). Based on the simultaneous absorption of two photons, the technique allows greater imaging depth and minimal phototoxicity compared to conventional fluorescence microscopy. 
Whereas other imaging techniques require surgical dissection that can damage tissue, two-photon microscopy allows direct imaging of cells in the undisturbed physiological environment of an intact organ. Two recent studies by Nimmerjahn et al. (4) and Davalos et al. (5) used this powerful imaging approach to examine the activity of microglia, the most abundant immune cell in the brain, in live mice. Microglia comprise ~10% of the cells in the central nervous system. Under pathological conditions such as neurodegenerative disease, stroke, and tumor invasion, these cells become activated, surround damaged and dead cells, and clear cellular debris from the area, much like phagocytic macrophages of the immune system. In healthy mammalian brain tissue, microglia display characteristically elongated cell bodies with spine-like processes that often branch perpendicularly. Until Nimmerjahn and Davalos applied two-photon microscopy to a live and healthy mammalian brain, it was generally thought that microglia are essentially quiescent cells--dormant and nonmotile. But a static state is hardly what was observed. The technique allowed the Nimmerjahn and Davalos groups to transcranially visualize microglia in live animals. Information was recorded from up to 200 µm below the brain's surface through a surgically thinned section of skull. Both groups generated transgenic mice whose microglia were fluorescently labeled. The easily detectable cells were observed for several hours in the brains of anesthetized mice. Whereas microglial cell bodies and main branches were stable for hours, their evenly distributed and highly ramified processes were remarkably motile, continuously and randomly undergoing cycles of formation, extension, and withdrawal on time scales of minutes (1.5 µm/min). These processes also displayed motile (4 µm/min), filopodia-like protrusions that typically formed bulbous tips with an average lifetime of 4 min. 
Although the function of these tips remains unclear, it is possible that they constitute specialized phagocytosis domains that clear accumulated metabolic products and deteriorated tissue. This high "resting" motility may serve a housekeeping function, enabling microglia to effectively sample and assess the status of the local surroundings and control their microenvironment. The restructuring activity of microglial processes is in sharp contrast to the apparent stability of dendritic processes of surrounding neurons. Microglial processes and protrusions were also observed to directly contact astrocytes, neuronal cell bodies, and blood vessels, suggesting that in healthy brain tissue, microglia communicate with other cortical cells to coordinately monitor the general health of the brain. Both groups also performed laser-induced injury of individual capillaries in the brains of the transgenic mice. Within a few minutes, time-lapse imaging revealed rapid, targeted movements of nearby microglial processes toward the injured site (see the figure). The average velocity of microglial extensions radially impinging on the target site was similar to extension rates during the resting state. Within 30 min after laser ablation, processes of nearby microglia reached the damaged site and appeared to accumulate and fuse together, forming a spherical containment around the damaged area and establishing a potential barrier between healthy and injured tissue (5). Microglia responded to mechanical injury in a similar way. The shielding of injured sites suggests a neuroprotective role for microglia. Furthermore, the early formation of spherical inclusions within the microglial processes suggests immediate phagocytic engulfment and removal of damaged tissue or leaked blood components. Together, these findings confirm the idea that microglia represent the first line of defense against invading pathogens or other types of brain tissue injury. 
Figure 1 Microglia patrol the brain and shield it from injury. Microglia continually extend (green) and retract (yellow) processes, surveying their immediate environment within the brain. The processes move rapidly toward a site of injury, such as a damaged blood vessel in the brain, in response to the localized release of a chemoattractant (gradient of orange) from the injured site. Once at the target site, the processes form a barrier to protect healthy tissue. CREDIT: PRESTON HUEY/SCIENCE To identify the molecular signals that mediate this targeted microglial response, Davalos et al. (5) made use of the observation that microglial migration can be induced in cell culture with nucleotides that signal through P2Y receptors expressed at the cell surface. They demonstrate that localized application of ATP to the mouse brain (through either a craniotomy or a small electrode; neither invasive technique itself elicited a response from microglia), which mimics nucleotide release from injured tissue, attracted microglial processes, similar to the microglial response to injury. Apyrase, an ATPase (adenosine triphosphatase) that hydrolyzes ATP and ADP, substantially reduced both the baseline motility of microglial processes and their response to laser-induced tissue injury. Furthermore, they showed that activation of P2Y receptors on microglia in the surrounding tissue is necessary for the rapid microglial response toward the injured site. Previous studies showed that extracellular ATP can induce ATP release from astrocytes. ATP also mediates communication between astrocytes and between astrocytes and microglia. This ATP-induced ATP release was essential for attracting microglial processes. Indeed, when the authors applied apyrase and then a nonhydrolyzable ATP analog from a microelectrode, they observed no such rapid microglial response. 
Applying connexin channel inhibitors, which inhibit ATP release from astrocytes, also blocked the microglial response toward the laser ablation site. Resting motility of microglial processes in the intact brain also seems to be modulated by the same ATP signaling mechanisms that mediate injury-induced responses, because apyrase and connexin channel inhibitors nearly abolished microglial baseline dynamics. These two elegant studies provide direct evidence for the highly dynamic nature of microglia, indicating that the brain is under constant immune surveillance by these cells. In the adult mammalian brain, there is generally little movement of cellular processes, except perhaps for those associated with synaptic plasticity that underlie learning and memory. Microglia are apparently never at physical rest either. Although the development of two-photon microscopy opens new perspectives for the analysis of intact organs, some major technical issues remain. Improvements in the resolution, depth penetration, image acquisition speed, and photon detector sensitivity of the microscopes will enhance our ability to follow intracellular signaling events and cellular traffic in living tissues. Likewise, the generation of mice expressing fluorescent proteins under the control of different cell type-specific promoters or inducible promoters should allow the study of multiple cell functions within intact organs using two-photon microscopy. Accurate methods to quantify image information are also needed. Despite these obstacles, the development of intact organ imaging should continue to have a major impact in biology over the coming years. References 1. W. R. Zipfel, R. M. Williams, W. W. Webb, Nat. Biotechnol. 21, 1369 (2003). 2. J. W. Wang, A. M. Wong, J. Flores, L. B. Vosshall, R. Axel, Cell 112, 271 (2003). 3. B. J. Bacskai et al., Nat. Med. 7, 369 (2001). 4. A. Nimmerjahn, F. Kirchhoff, F. Helmchen, Science 308, 1314 (2005). 5. D. 
Davalos et al., Nat. Neurosci. 8, 752 (2005). The authors are at the Curie Institute, 26 rue d'Ulm, 75245 Paris Cedex 05, France. L. Fetler is in the Laboratoire Physico-Chimie Curie, CNRS UMR 168, Institut Curie, Paris, France. E-mail: luc.fetler at curie.fr S. Amigorena is in the Immunité et Cancer, INSERM U365, Institut Curie, Paris, France. E-mail: sebastian.amigorena at curie.fr 10.1126/science.1114852 Include this information when citing this paper. From checker at panix.com Tue Jul 19 01:21:43 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:21:43 -0400 (EDT) Subject: [Paleopsych] Telegraph: The caveman in all of us Message-ID: The caveman in all of us http://portal.telegraph.co.uk/arts/main.jhtml?xml=/arts/2005/07/10/bomil10.xml&sSheet=/arts/2005/07/10/bomain.htm (Filed: 10/07/2005) Noel Malcolm reviews The Tribes of Britain by David Miles. There is no book so sensible that it cannot be rendered slightly absurd by a publisher's blurb. "At a time of political devolution, immigration, globalisation and European referendums," announces the publicity handout for this book, "The Tribes of Britain provides a timely historical overview of the big issue: 'where have we come from?'" So reading a book by an archaeologist about the early inhabitants of the British Isles will help you to make up your mind about the European Constitution? You might as well study the history of Babylonian astronomy in order to decide what line to take on global warming. It goes on. "Interestingly, Britain has always depended upon immigration even [sic] since the first hunter-gatherers followed the herds of horses and reindeer into this uninhabited frontier..." Well, yes, at a time when Britain was uninhabited, immigration would have played quite a role - assuming that modern science has definitively ruled out the spontaneous generation of babies in cabbage-patches. 
But what it means to say that Britain has "always depended" on immigration is much less clear; and even if it were clear for the Bronze Age or the Anglo-Saxon period, it would have no necessary implications for policy today. What we are witnessing here is a bad attack of STS - Spurious Topicality Syndrome - a disease which is almost endemic in the publishing world. Journalists may need topical "pegs" on which to hang their daily or weekly articles, but a serious book, intended to last for years, should be able to stand up on its own. This book is - despite its packaging, and despite the occasional clanky humour of its author - a serious attempt to provide an overview of a huge and fascinating subject. David Miles was until recently the Chief Archaeologist at English Heritage; having been on digs in all parts of the British Isles, looking at evidence from the Neolithic period to the Middle Ages, he is an unusually well qualified guide to the early history of these islands. There is much to be learned here about both the latest archaeological techniques and the latest theories. The techniques are fascinating in themselves. We all know that timbers can be dated by tree-rings, but the degree of precision that is now possible is simply astonishing: when a timber circle recently appeared at low tide on the Norfolk coast, archaeologists were able to determine that the oak was cut in the spring of 2,049 BC. One of the experts who studied "Seahenge" also worked out, after minutely examining the timbers, that they bore the imprints of 51 different bronze axe-blades. That is old-style archaeology, practised at new levels of sophistication. But there is also a whole new science of physical and genetic analysis. Give a bone from a Neolithic skeleton to the right scientists, and they will tell you, after analysing the carbon and nitrogen isotopes, what types of foodstuff provided the protein in that person's diet. 
Better still, give them a tooth, and they may be able to extract the remains of enough soft tissue from inside it to run up a profile of the deceased's DNA. This sort of genetic analysis has made possible the development of some new theories, and the questioning of many old ones. Textbooks used to describe European prehistory in terms of waves of migrating peoples: the Celts moved into the British Isles in the first millennium BC, replacing whoever was here before, and they were in turn replaced (in England, at least) by Angles and Saxons and Danes. It was a game of ethnic musical chairs; no one seems to have inquired too closely into what happened to the people who lost their seats when the music changed. But do entire peoples simply move in, expelling or murdering the previous ones? It can happen, but it seems to be more the exception than the rule. Much more commonly, the underlying population remains in place, and is just assimilated to a newly dominant culture. The way of life may change; the language may change; and the evidence left by such changes can make it look as if one population has given way to a different one. But that is rather as if a future archaeologist, examining the contents of your house, were to decide that the Habitat People who lived there in the 1970s had been driven out by the Ikea People who arrived in the 1990s. This is where the new techniques of genetic analysis come into their own. When researchers recovered DNA from the tooth of an Ice Age huntsman buried in a cave in Cheddar Gorge, they found that it matched the DNA of the history master, and two of his pupils, at the local school. No doubt those Somerset folk had also acquired, in the intervening 9,000 years, many ancestors whose origins lay elsewhere (Celts, Romans, Saxons, Huguenots, and so on). But the underlying continuity is a striking fact, all the more striking because it was unknowable until a few years ago. 
Unfortunately, though, individual findings such as these are much more clear-cut than any overall pattern can be. Miles notes that there is a sort of Celtic-Germanic gradient running west-east across Britain: the further east an English family comes from, the more likely it is to share its DNA with people from the Netherlands, Denmark and Germany. But, he suggests, this could be the result of long-term interactions on a small scale; it does not necessarily imply any single episode of mass-migration. The early part of this book is full of such uncertainties; but they are the best uncertainties currently available. In the second half of the book, as David Miles canters from the early Middle Ages to the present, his information becomes more definite and less interesting (and less up to date: he seems not to know that recent research has challenged the identification of the Black Death with bubonic plague). There are better books to read on the Reformation, the Industrial Revolution, and so on; Miles apparently thought it was necessary to cover all this ground in order to bring in Huguenots, Afro-Caribbeans and East European Jews, but what he has to say about each of those groups is unoriginal and rather slight. Like an archaeologist in a hurry, the reader should dig down into the early part of this book, leaving the rest of it (and its preposterous blurb) on the spoil-heap. 
Noel Malcolm's books include 'Aspects of Hobbes' (Clarendon Press) BOOK INFORMATION Title The Tribes of Britain Author David Miles Publisher Weidenfeld & Nicolson, £20, 480 pp ISBN n/a From checker at panix.com Tue Jul 19 01:21:51 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:21:51 -0400 (EDT) Subject: [Paleopsych] Independent: When it comes to sex, men are eternal optimists and romantic at heart Message-ID: When it comes to sex, men are eternal optimists and romantic at heart http://news.independent.co.uk/world/science_technology/article299458.ece Published: 16 July 2005 Men say they are as romantic as women but expect to sleep with more partners, according to the biggest online survey of sexual attitudes and gender differences. The results of a survey of 250,000 men and women who completed a detailed psychological questionnaire reveal that many male and female stereotypes are deep-seated and biological. Men tend to have bigger sex drives than women and are more "sexually optimistic". When men were asked how many partners they expect to have over the next five years they averaged 3.4 compared to the female average of 1.9. While women performed consistently better than men at psychological tests involving verbal fluency and locating objects, men tended to do better at "spatial awareness" tasks. Yet the gender differences are not completely distinct. Scientists found that about a fifth of men have typically "female" brains and an equal proportion of women have a mental approach typical of men. Professor John Manning of the University of Central Lancashire said that the on-line survey was unprecedented because it involved people from about 170 countries and six ethnic groups. The survey was carried out for a television series called The Secrets of the Sexes, which begins on BBC1 tomorrow night. 
Each person had to answer 200 questions about their sexual behaviour and attitudes and carry out a series of simple tests of personality traits and cognitive abilities, such as being able to match the correct angle of lines drawn on a screen. "There are well established sex differences in abilities and behaviours but the question is where do they come from? Are they due to nurture, or nature or are they a mixture of both?" Professor Manning said. The scientists found similar gender differences despite the country of origin or ethnic background of the participants. From checker at panix.com Tue Jul 19 01:21:58 2005 From: checker at panix.com (Premise Checker) Date: Mon, 18 Jul 2005 21:21:58 -0400 (EDT) Subject: [Paleopsych] Washington University Law Quarterly: The Posse Comitatus Act: A Principle in Need of Renewal Message-ID: The Posse Comitatus Act: A Principle in Need of Renewal by Matthew Carlton Hammond http://law.wustl.edu/WULQ/75-2/752-10.html Volume 75 Number 2 Summer 1997 Cite As 75 Wash. U. L.Q. 953 THE POSSE COMITATUS ACT: A PRINCIPLE IN NEED OF RENEWAL I. INTRODUCTION In response to the military presence in the Southern States during the Reconstruction Era, Congress passed the Posse Comitatus Act[1] ("PCA" or the "Act") to prohibit the use of the Army in civilian law enforcement. The Act embodies the traditional American principle of separating civilian and military authority and currently forbids the use of the Army and Air Force to enforce civilian laws.[2] In the last fifteen years, Congress has deliberately eroded this principle by involving the military in drug interdiction at our borders.[3] This erosion will continue unless Congress renews the PCA's principle to preserve the necessary and traditional separation of civilian and military authority.
The need for reaffirmation of the PCA's principle is increasing because in recent years, Congress and the public have seen the military as a panacea for domestic problems.[4] Within one week of the bombing of the federal building in Oklahoma City,[5] President Clinton proposed an exception to the PCA to allow the military to aid civilian authorities in investigations involving "weapons of mass destruction."[6] In addition to this proposal, Congress also considered legislation to directly involve federal troops in enforcing customs and immigration laws at the border.[7] In the 1996 presidential campaign, candidate Bob Dole pledged to increase the role of the military in the drug war, and candidate Lamar Alexander even proposed replacing the Immigration and Naturalization Service and the Border Patrol with a new branch of the armed forces.[8] The growing haste and ease with which the military is considered a panacea for domestic problems will quickly undermine the PCA if left unchecked. Minor exceptions to the PCA can quickly expand to become major exceptions. For example, in 1981, Congress created an exception to the PCA to allow military involvement in drug interdiction at our borders.[9] Then in 1989, Congress designated the Department of Defense as the "single lead agency" in drug interdiction efforts.[10] The PCA criminalizes, and thus effectively prohibits, the use of the Army or the Air Force as a posse comitatus[11] to execute the laws of the United States. It reads: Whoever, except in cases and under circumstances expressly authorized by the Constitution or Act of Congress, willfully uses any part of the Army or Air Force as a posse comitatus or otherwise to execute the laws shall be fined under this title or imprisoned not more than two years, or both.
[12] Though a criminal law, the PCA has a more important role as a statement of policy that embodies "the traditional Anglo-American principle of separation of military and civilian spheres of authority, one of the fundamental precepts of our form of government."[13] Major and minor exceptions to the PCA, which allow the use of the military in law enforcement roles, blur the line between military and civilian roles, undermine civilian control of the military, damage military readiness, and inefficiently solve the problems that they supposedly address.[14] Additionally, increasing the role of the military would strengthen the federal law enforcement apparatus that is currently under close scrutiny for overreaching its authority.[15] Although it seems benign, such an increase in military authority revives fears of past overreaching during the late 1960s.[16] This Note argues that the principle embodied by the PCA should be renewed by rejecting exceptions to the Act and reaffirming the policy behind its inception. This renewal is necessary to preserve the historic division between civilian and military roles, to maintain civilian superiority over the military, to enhance military readiness, and to efficiently attack domestic problems. Part II reviews the traditional American fear of a standing army and the circumstances leading to the PCA's passage. Part III discusses the current scope of the PCA and the permissible roles of the military. Part IV explains how exceptions to the PCA endanger its underlying principle. The explanation covers the spectrum of possible exceptions to the PCA: drug interdiction, border duty, and biological and chemical weapons investigations.[17] Part V proposes legislative action to reaffirm the policy of the PCA and to limit any further exceptions to it. II.
PASSAGE OF THE PCA: REAFFIRMATION OF A LONG-STANDING AMERICAN TRADITION The hotly contested presidential election of 1876 directly led to the passage of the PCA,[18] but the principle behind the Act--excluding the military from the civilian sphere--is as old as the United States.[19] Since the writing of the Declaration of Independence, Americans have mistrusted standing armies and have seen them as instruments of oppression and tyranny.[20] Over time, the military has gained esteem among the populace, but it has always been held separate from civilian government and limited to its focused goal of military preparedness and national security.[21] This antimilitarist bent of the United States is evident in our founding documents.[22] The Declaration of Independence decries King George III's use of armies "to compleat works of death, desolation and tyranny . . . totally unworthy . . . of a civilized nation."[23] Specifically, the Signers of the Declaration of Independence attacked the keeping of a standing army in time of peace,[24] the military's independence from civil control,[25] and the quartering of troops among the population of the colonies.[26] In response to these concerns, the Articles of Confederation limited the role of the military.[27] Specifically, they restricted the raising of armies and the maintaining of naval vessels.[28] They also reserved the appointment of officers, other than the rank of general, to the states, thus lessening the central government's control over the military.[29] In addition to establishing a weak central government, the Articles of Confederation's reliance upon militia for military power was inadequate to meet the needs of the nation.[30] In the Constitution, the Founding Fathers mandated civilian control of the military through the government structure.[31] While allowing for a standing army and the maintenance of a navy,[32] the Constitution restricts military appropriations to two years;[33] designates
the President as the Commander-in-Chief, thereby subordinating the military to civilian authority;[34] and empowers Congress to regulate the armed forces.[35] Additionally, the Bill of Rights proscribes the peacetime quartering of soldiers in private homes[36] and provides for the states to have a well-regulated militia as a counterbalance to a national army.[37] Fear of a standing army helped to motivate the enactment of the Bill of Rights beyond the specific amendments relating to the military.[38] By guaranteeing individual rights in the First Amendment[39] and freedom from unreasonable search and seizure in the Fourth Amendment,[40] it was hoped that the abuses of the British army could be prevented in the new republic.[41] The Founding Fathers recognized that the military's authoritarian nature, while effective in defending democracy, remains antithetical to the basic tenets of democracy.[42] According to this reasoning, "[s]kepticism and criticism" of the military are "absolute requisites of freedom" that are missing from every unfree nation.[43] Fear of the military seemed to have been forgotten until the mid-1800s, when the events leading up to the enactment of the PCA began, prior to the Civil War. The Fugitive Slave Act of 1850 allowed federal marshals to call on the posse comitatus to aid in returning a slave to his owner.[44] In the context of the Fugitive Slave Act, Attorney General Caleb Cushing issued an opinion defining the posse comitatus to include the military, even if entire units had to be called upon while remaining under the direction of their own officers.
[45] This use of the military by federal marshals became common; in Kansas, for example, federal troops were used to quell disorder between pro- and anti-slavery factions.[46] The post-Civil War military presence in the South continued to foment a distaste for military involvement in the civilian sphere.[47] The military presence was necessary to support the Reconstruction governments installed in the South,[48] but the situation came to a head during the 1876 presidential election, which was determined by only one electoral vote.[49] In the election, Rutherford B. Hayes won with the disputed electoral votes of South Carolina, Louisiana, and Florida.[50] In those states, President Ulysses S. Grant had sent troops as a posse comitatus for federal marshals to use at the polls, if necessary.[51] This misuse of the military in an election--the most central event to a democracy--led Congress to enact the PCA in 1878.[52] III. THE SCOPE OF THE PCA In the nearly 120 years that the PCA has been in effect, there have been no criminal prosecutions under the Act, although it is a criminal statute.[53] This lack of criminal prosecutions has deprived courts of the opportunity to interpret the PCA directly.
[54] Courts have had a few opportunities to interpret the statute indirectly, however.[55] Defendants have unsuccessfully raised the PCA as a shield, contending that a violation of the Act divests the trial court of jurisdiction[56] and that evidence gathered through a violation of the PCA should be suppressed.[57] The PCA has been successfully used where (1) the involvement of the military drew into question whether federal law enforcement officers were lawfully performing their duty[58] and (2) a PCA violation enabled the federal government to avoid liability under the Federal Tort Claims Act[59] because the tortious act in question "could not have been authorized on behalf of the United States by any action short of a Congressional enactment."[60] This dearth of judicial interpretation has left "the parameters of the [PCA] . . . substantially untested."[61] Due to the resulting lack of clarity, the PCA does not actually prohibit all so-called "exceptions" to its application, and such exceptions-in-name have been enacted to clarify--or, depending upon your view, alter--its boundaries[62] and to provide guidance to military commanders.[63] The greatest uncertainties regarding the PCA concern what constitutes "any part of the Army or Air Force"[64] and what actions "execute the laws."[65] A. Elements of the Armed Forces Covered by the PCA The PCA expressly applies only to the Army and Air Force.[66] Congress did not mention the Navy, Marine Corps, Coast Guard, or National Guard in the PCA; accordingly, the PCA does not limit them.[67] However, the Department of Defense has extended by regulation the PCA's prohibitions to the Navy and Marine Corps. 
[68] Although the Coast Guard is part of the armed forces, in peacetime it falls under the authority of the Department of Transportation[69] and has an express law enforcement function.[70] Additionally, the PCA applies only to forces in federal service, and therefore the National Guard is not limited by the PCA in its normal status of state service.[71] Because the National Guard is the modern militia, this distinction actually follows the intent of the PCA, which was not meant to limit militias.[72] The courts have also implicitly limited "army" to the official military establishment rather than its broader plain meaning.[73] The breadth of the PCA in its application to "any part of the Army or Air Force" is uncertain and can be a factual question.[74] The PCA applies to on-duty service members, but not to off-duty service members acting in a private capacity.[75] Conversely, when an off-duty service member acts under the direction of military authorities, the PCA applies.[76] Whether the PCA applies to civilian employees of the armed forces remains undecided. On the basis of general agency principles, the PCA should arguably apply to civilian employees during the performance of their duties,[77] but Department of Defense regulations do not apply PCA restrictions to them.[78] Furthermore, according to the Judge Advocate General, civilian employees of the Army are technically not part of the military.[79] B. What Action Constitutes a Violation of the PCA The PCA proscribes the use of the military[80] "as a posse comitatus or otherwise to execute the laws."[81] Courts have used three formulations of an active-versus-passive test to determine a PCA violation. All three formulations result from litigation that ensued following the 1973 standoff between federal authorities and the American Indian Movement at Wounded Knee, South Dakota.[82] The formulations allow passive assistance in support of law enforcement without causing a PCA violation.[83] In United States v.
Red Feather,[84] the court defined a PCA violation as "direct active use of Army or Air Force personnel,"[85] thus creating the passive-versus-active dichotomy. The court found that a provision of military equipment and supplies was not an active use of the military.[86] The court found support for this position in Congress's passage of the Economy Act of 1932,[87] which provides for the transfer of resources between executive departments.[88] In United States v. Jaramillo,[89] the court focused on whether use of the military "pervaded the activities" of the civilian law enforcement agencies to determine a PCA violation.[90] The court found that the provision of supplies and equipment alone did not constitute a violation,[91] and it concerned itself with whether the military observers involved had too much influence over civilian law enforcement decisions regarding negotiations, use of equipment, and the policy on the use of force.[92] Although the court did not necessarily find a violation of the PCA, the evidence cast doubt on whether the federal authorities were "lawfully engaged in the lawful performance of their official duties."[93] Therefore, the court dismissed the indictment for obstructing law enforcement officers.[94] United States v. McArthur,[95] approved by the Eighth Circuit in United States v. Casper,[96] promulgated the third formulation of the active-versus-passive test and focused on the individual subjected to the PCA violation.[97] The McArthur formulation asked whether "military personnel subjected . . . citizens to the exercise of military power which was regulatory, proscriptive, or compulsory in nature."[98] On basically the same facts as Jaramillo, the McArthur court found no PCA implications.[99] United States v. Yunis[100] further clarified the elements of the McArthur formulation: regulatory power "controls or directs";[101] proscriptive power "prohibits or condemns";[102] and compulsory power "exerts some coercive force."
[103] It should be noted that the PCA's effect is limited to the United States and does not bar the military's support of law enforcement agencies abroad. In Chandler v. United States,[104] the court held that the PCA has no extraterritorial effect.[105] However, some restrictions do exist on the military's activities outside the United States. These restrictions arise out of military regulations[106] and congressional acts that have limited military support to foreign civilian law enforcement authorities.[107] C. Exceptions to the PCA The PCA explicitly recognizes constitutional and legislative exceptions to its application.[108] The existence of any constitutional exceptions was contested at the time of the PCA's enactment.[109] Some proponents of the PCA saw the exceptions as inherent in the executive powers of the President and in his position as Commander-in-Chief of the armed forces,[110] thus making them beyond the reach of Congress to limit.[111] Others who supported the PCA's passage recognized no such exceptions.[112] Constitutional exceptions to the PCA may actually lie only in the "twilight zone" in which the President may act where Congress has not, as described in Justice Jackson's concurrence in Youngstown Sheet & Tube Co. v. Sawyer.[113] Another "constitutional" exception to the PCA is described by the Department of Defense regulations based upon the "inherent right of the U.S. Government . . . to ensure the preservation of public order and to carry out governmental operations . . . by force, if necessary."[114] The Office of Legal Counsel of the Department of Justice has promulgated a similar view in recognition of the U.S. government's power to protect federal functions.[115] The power to protect federal functions has been so broadly interpreted, however, that if accepted it would become the exception that swallows the rule.
Now-Chief Justice William Rehnquist interpreted this power to extend to any "uniquely federal responsibility" while he was an attorney in the Office of Legal Counsel.[116] However, this exception has yet to be tested in the courts and would likely be interpreted as narrowly as the other exceptions to the PCA. Congress itself has recognized several exceptions to the PCA,[117] which this Note categorizes as exceptions-in-fact and exceptions-in-name.[118] The exceptions-in-fact are true exceptions that exempt otherwise criminal actions under the PCA and alter its boundaries. Exceptions-in-name include exceptions that are described or perceived as exceptions to the PCA, but which authorize allowable acts under any of the court-created tests.[119] Exceptions-in-name do not alter the accepted boundaries of the PCA and do not make previously criminal acts legal. They are sometimes simply termed "clarifications."[120] Exceptions-in-name allow the military to provide equipment and supplies,[121] technical assistance,[122] information,[123] and training to law enforcement agencies.[124] Such provisions constitute passive assistance to civilian law enforcement, which does not subject any civilian to the regulatory, proscriptive, or coercive power of the military.[125] Exceptions-in-fact include protection of the rights of a discoverer of a guano island,[126] removal of persons illegally occupying Indian lands,[127] protection of national parks,[128] investigation of crimes against the President or others in the line of succession,[129] and protection of civil rights where local authorities do not or cannot protect them.[130] Exceptions-in-fact also include the quelling of civil disturbances and labor strife that rises to the level of civil disorder. For example, troops were used to put down the Whiskey Rebellion[131] long before the PCA was passed and to maintain order during school desegregation in the South after the Act's passage.
Troops have also been used to quell riots in Detroit and other cities.[132] More recently, they were deployed on the streets of Los Angeles in 1992 after the Rodney King verdict.[133] The courts have recognized another type of exception through the military purpose doctrine, which is not explicitly mentioned in the PCA.[134] The doctrine allows the military to enforce civilian laws on military installations, to police its own members, and to perform its military functions even if there is an incidental benefit to civilian law enforcement.[135] However, the doctrine is interpreted under the McArthur formulation--whether a person is subjected to military power that is regulatory, proscriptive, or coercive[136]--when the activities occur off-base.[137] D. Allowable Domestic Uses of the Military There are many other uses of the military that seem to implicate the PCA but are not within its scope because no law is being enforced. Since the passage of the PCA, the military has been used several times for domestic purposes that do not conform to its traditional role. The PCA proscribes use of the Army in civilian law enforcement, but it has not prevented military assistance in what have been deemed national emergencies, such as strike replacements and disaster relief. However, these emergencies differ in character from other exceptions to the PCA by their very nature as emergencies and by the duration of the military involvement. Presidents Richard Nixon and Ronald Reagan both used the military to replace striking federal employees. In 1970, President Nixon sent 30,000 federal troops to replace striking postal workers in New York,[138] and in 1981, President Reagan replaced striking air traffic controllers.[139] The military has also been used to replace striking coal miners.[140] Disaster relief, another common use of the military, does not seem to violate the PCA because it is not a mission executing the laws.
After the 1906 San Francisco earthquake, the Army led the effort to put out fires and restore order. More recently, Hurricane Andrew in Florida resulted in a large military presence during the relief effort.[141] However, the military also found itself providing election facilities in Florida--a situation too similar to the one that precipitated the passage of the PCA in 1878.[142] IV. EXCEPTIONS TO THE PCA ENDANGER THE MILITARY AND THE UNITED STATES The PCA's exceptions-in-name and exceptions-in-fact endanger the military and the United States by blurring the traditional line between military and civilian roles, undermining civilian control of the military, damaging military readiness, and providing the wrong tool for the job.[143] Besides the current drug interdiction exceptions, the 104th Congress considered two bills to create new exceptions to the PCA.[144] The Border Integrity Act[145] would have created an exception to allow direct military enforcement of immigration and customs laws in border areas.[146] The Comprehensive Antiterrorism Act[147] would have allowed military involvement in investigations of chemical and biological weapons.[148] This Note will discuss these two proposed exceptions together with the exception mandating military involvement in counter-drug operations to illustrate the negative effects of creating exceptions to the PCA. Increasing direct military involvement in law enforcement through border policing--an exception-in-fact[149]--is an easy case against which to argue. Investigative support--an exception-in-name[150]--is passive, indirect enforcement. Drug interdiction--an exception-in-name for the most part--falls between border policing and investigative support because of the extensive military involvement. A. Blurring the Lines The differences in the role of civil law enforcement and the role of the military are blurred by the PCA's exceptions.
Civilian law enforcement is traditionally local in character, responding to needs at the city, county, or state level. Civilian law enforcement trains for the law enforcement mission, which differs from the military mission.[151] Civilian law enforcement requires the cognizance of individual rights and seeks to protect those rights, even if the person being protected is a bad actor. Prior to the use of force, police officers attempt to de-escalate a situation. Police officers are trained to use lesser forms of force when possible and to draw their weapons only when they are prepared to fire. On the other hand, soldiers are trained when to use or not to use deadly force.[152] Escalation is the rule. The military exists to carry out the external mission of defending the nation. Thus, in an encounter with a person identified with the enemy, soldiers need not be cognizant of individual rights, and the use of deadly force is authorized without any aggressive or bad act by that person.[153] This difference between soldiers and police was tragically illustrated in the recent shooting of a young man by marines patrolling near the Mexican border.[154] The exceptions of border duty, investigative support, and drug interdiction blur the traditional line between civilian law enforcement and the role of the military. Border duty by soldiers under the Border Integrity Act has traditionally been the responsibility of civilian law enforcement. Drug interdiction has traditionally been a task for civilian law enforcement, and long-term military involvement comes close to subjecting civilians to all three types of military power--a fear of the Founding Fathers.[155] Investigative support by the military is very reminiscent of the military surveillance conducted in the 1960s, which was condemned by Congress and members of the Supreme Court as an improper use of the military.[156] B.
Undermining Civilian Control of the Military Civilian control of the military is undermined whenever military activities invade areas that "endanger liberties or the democratic process, even when that expansion is sanctioned by the civilian leadership."[157] The military should not gain "unwarranted influence" in civilian affairs.[158] The purpose of civilian control is "to ensure that defense policy and the agencies of defense policy are subordinated to other national traditions, values, customs, governmental policies, and economic and social institutions."[159] The civilian government must therefore consider the institutional characteristics of the military, including personnel, doctrine, training, equipment, and morale, when making policy decisions about the domestic use of the military.[160] A military with many nonmilitary functions is more "autonomous" and thus under less civilian control.[161] In the case of counter-drug activities, the government has disregarded all these considerations. The counter-drug mission is not a good fit for the military: the chronic nature of the drug problem requires the military's deep involvement over time without any true success[162] because the high profitability of drug trafficking makes its complete deterrence impossible.[163] This involvement without success hurts morale,[164] and the long-term nature of the involvement cannot help but increase the "unwarranted influence" of the military in civilian affairs.[165] Both border duty and investigative support, if enacted, would create the same concerns as the counter-drug mission. Increasing the involvement of the military in civilian law enforcement will make it difficult to maintain the military's subordinate role over the long-term. Additionally, use of the military in civilian law enforcement damages its professionalism, which the PCA's enactment helped to develop. 
Many of these same concerns underlay the government's reluctance to send the military abroad without clear criteria and timelines for withdrawal,[166] yet those concerns have been ignored in domestic military use. C. Damaging Military Readiness The military's primary mission is national security, and the wisdom of all military decisions is ultimately weighed against whether national security is enhanced or damaged. Military readiness is a key to modern warfare and to the maintenance of national security.[167] In recognition of this fact, the military can refuse a request for aid in drug interdiction and in the investigation of chemical and biological weapons if military readiness might be compromised.[168] However, this power of refusal does not prevent injury to military readiness,[169] because the military still takes on these missions, and even their mere consideration injures readiness by adding a nonmilitary factor to decisions about the redirection of resources.[170] The border duty, investigative support, and drug interdiction exceptions are double-edged swords with respect to military readiness. The military has embraced new missions like drug interdiction as a way to preserve force structure and budget levels and to improve public relations.[171] In this respect, these new missions may aid readiness by preserving support for military strength and funding, but this benefit is outweighed by the slight shift of focus away from the mission of fighting a war.[172] This change of focus lessens the fighting edge of the military[173] and dampens the "warrior spirit."
[174] Additionally, these missions require equipment modifications and the reallocation of resources.[175] For example, F-15 pilots do not hone their dogfighting skills by tracking a single-engine Cessna flying north from Mexico; in the Gulf War, there were stories of inadequately trained National Guard units that had participated more frequently in nontraditional missions, yet were incapable of fulfilling their military mission.[176] The three exceptions to the PCA affect military readiness in a variety of ways. Drug interdiction has injured military readiness as a result of expensive equipment modifications and the redirection of resources. The 1993 Department of Defense budget included more than $1.4 billion for drug interdiction missions.[177] This budget allocation has resulted in a "drug command" of sorts which is entirely focused on the domestic mission of drug interdiction.[178] Border duty requires a different mindset and a different level of restraint than warfare,[179] thus disrupting the optimum culture and mindset needed to maintain national security. Investigatory support by the military is also a mission differing from that which currently exists in the military.[180] To redirect resources or to consider performing such nonmilitary missions involves considerations that lessen the importance of strictly improving military readiness, even when the only question is where to train. D. Wrong Tool for the Job Illegal immigration, drug interdiction, and investigative support relating to terrorism are all long-term problems requiring long-term solutions. These problems are not easily resolved, and no end to the military's involvement is foreseeable.[181] Because of the significance of the problems and their continuing and chronic nature, using the military to combat these problems is like using a sledgehammer to open a locked trunk when all one needs is the key. It is better to fashion a key than to destroy the trunk.
All three exceptions to the PCA require using the wrong tool for the job. For example, border duty forces the military to alter its mindset and training. The Border Patrol and other law enforcement agencies already have the proper mindset and qualifications and are better able to do the job. Using an F-15 to track drug smugglers' slow planes is both excessive and expensive. A basic military soldier costs the government $82,000 a year in training and upkeep; a soldier's involvement in drug interdiction is thus much more expensive than a civilian counterpart's participation. Investigatory support for weapons of mass destruction to counter terrorism is more than a minor exception because terrorism is a continuing problem without end. We would best be served by developing these resources in civilian law enforcement.[182]

V. RENEWAL OF THE POLICY EMBODIED BY THE PCA

The fundamental precept of maintaining the separation between the military and civilian spheres of action[183] must be renewed, not eroded by exceptions. Both exceptions-in-name and exceptions-in-fact should be avoided because they injure that separation.[184] To maintain the principle that animates the PCA, the PCA should be reaffirmed and strengthened. Below are three possible approaches. One approach is to do nothing, but to do nothing would only leave the situation in its current unacceptable state. The military is seen as a panacea for many domestic problems that do not properly fall within the military sphere. Congress may resolve to leave the PCA alone, but it should be remembered that in 1878 the PCA was enacted precisely because the government had to be reminded of the fundamental principle of separating the military from the civilian sphere.[185] The need to remind, or re-remind, government of that fundamental principle exists today. Another approach is to amend the Constitution, but this is less appealing than the first approach.
A constitutional amendment by its very nature would strengthen the principle of excluding the military from the execution of civilian laws, but it is inflexible. In this case, a constitutional amendment would limit the powers of the President by limiting his authority as Commander-in-Chief and executor of the laws.[186] If these powers are qualified by Congress, as some commentators suggest,[187] then the amendment would only add inflexibility to the Constitution, thereby weakening one of its greatest assets.[188] Normally, the "twilight zone"[189] of Presidential power provides enough flexibility to allow the United States to best meet the uncertainties of the future. A constitutional amendment would remove the "twilight zone" with respect to military use. It might be possible to amend the Constitution to allow nonmilitary use of the military through the exercise of emergency powers, but the resulting amendment would still create a fixed standard that might not foresee some future event, making it unacceptable.[190] Additionally, a constitutional amendment should not be enacted unless absolutely necessary for the functioning of our government or society.[191] The third and best approach is a legislative reaffirmation of the fundamental principle behind the PCA, with added guidelines to help focus consideration of PCA exceptions. Legislative action refocuses the debate on the use of the military away from the pressing problems into which it might be drawn, and it retains the flexibility that a constitutional amendment would remove.
Additionally, the legislative solution maintains flexibility, yet Congress and the President remain constrained by public opinion in their use of the military.[192] This Note proposes that Congress repeal the PCA in Title 18 and enact the following statute in Title 10:

(a) Any part of the armed forces,[193] excluding the Coast Guard, is prohibited from acting as a posse comitatus or otherwise to execute the laws, except in cases and under circumstances expressly authorized by the Constitution or Act of Congress.[194]

(b) Exceptions to paragraph (a) allowing use of the armed forces must meet the following criteria:
(1) the use must be triggered by an emergency, which is defined as any occasion or instance for which Federal assistance is needed to supplement State and local efforts and capabilities to save lives and to protect property and public health and safety, or to lessen or avert the threat of a catastrophe[195]--generally a sudden, unexpected event;[196]
(2) the use must be beyond the capabilities of civilian authorities; and
(3) the use must be one limited in duration and not one which addresses a chronic, continuing issue or problem.

(c) Clarifications to prohibitions in subsection (a) are to be made by regulations to be published in the Federal Register and printed in the Code of Federal Regulations.

(d) This section is an affirmation of the fundamental precept of the United States of separating the military and civilian spheres of authority.

(e) Nothing in this section shall be construed to affect the law enforcement functions of the United States Coast Guard.

First, a repeal of the PCA in Title 18, Crimes and Criminal Procedure, and recodification into Title 10, Armed Forces, would bring the law into line with its current function and force. The PCA is a law of policy, both in application and political discourse. The total lack of prosecutions under the PCA[197] causes the law to lack force and credibility, because the crime is going unpunished.
Recodification recognizes the law as a limitation on the use of the military without losing credibility due to the lack of enforcement. Additionally, other criminal statutes would cover misappropriation of military services,[198] or Congress could modify them to cover activity prohibited by the PCA. The 94th Congress considered such a recodification.[199] The general language in subsection (a) of the proposed law would be substantially similar to the current wording of the PCA.[200] This language leaves some of the ambiguity and vagueness in the law, which in turn leaves intact the flexibility of the current law and clearly marks it as a continuation of the PCA. Past interpretations of the PCA would therefore apply equally to the proposed law as they do to the current law. However, one significant change in the language extends the principle of the PCA to the Navy and Marine Corps by reference to "the armed forces." This extension broadens the language of the PCA[201] and makes the current policy--as it is evinced by Department of Defense regulations--law, thereby erasing a meaningless distinction.[202] By codifying the DoD regulations, any change would require congressional approval.[203] The Coast Guard is explicitly excluded from the proposal in both subsections (a) and (e) to leave its current law enforcement responsibilities intact. The criteria for exceptions create a structure for considering which exceptions to the PCA are proper. In subsection (b), the basic criterion proposed for an exception is an emergency of limited duration and a nonchronic nature that is beyond the capabilities of civilian authorities.[204] The purpose of such criteria is to keep the military from being drawn into a substantial, long-term, and distracting role, such as involvement in drug interdiction activities. The emergency requirement recognizes the importance of the separation of military and civilian spheres by stating that anything less will not involve the military.
An emergency also suggests something more than an ordinary occurrence, which further ties in the notion of a nonchronic problem requiring a limited time commitment. The military should be used as a stop-gap, not as a permanent or regular solution to a problem.[205] Requiring the problem to be beyond the capabilities of civilian authorities forces the military to stay out of matters that can otherwise be handled by the proper authorities[206] and will encourage the development of those authorities' capabilities to deal with chronic, nonemergency problems. The framework of this proposal would allow exceptions for civil disturbances, insurrection, strike replacement, and disaster relief, because all are limited in scope, require resources usually beyond local authorities, and would by nature be emergencies. The use of troops for border duty would fail under all three requirements because no emergency is occurring, involvement is not of limited duration, and customs and immigration problems are not beyond the capabilities of immigration authorities. The counter-drug exception also fails under all three criteria, particularly because it involves no emergency and is not of limited duration. The investigative support exception for weapons of mass destruction fails under the limited-duration and capabilities-of-civilian-authorities criteria, but like the PCA, the proposal does not prohibit investigatory support. The requirement of regulations to clarify the law lessens the need for exceptions-in-name while providing military commanders with guidance. One of the reasons Congress passed the counter-drug exception was that the military commanders lacked guidance. The inclusion of clarifying regulations in the Code of Federal Regulations delineates the acceptable uses of the military and promotes public discourse about the appropriateness of these uses.

VI. CONCLUSION

The Departments of Justice and Defense got it right as recently as 1979:

The [PCA] expresses one of the clearest political traditions in Anglo-American history: that using military power to enforce the civilian law is harmful to both civilian and military interests. The authors of the [PCA] drew upon a melancholy history of military rule for evidence that even the best intentioned use of the Armed Forces to govern the civil population may lead to unfortunate consequences. They knew, moreover, that military involvement in civilian affairs consumed resources needed for national defense and drew the Armed Forces into political and legal quarrels that could only harm their ability to defend the country. Accordingly, they intended that the Armed Forces be used in law enforcement only in those serious cases to which the ordinary processes of civilian law were incapable of responding.[207]

Fighting "the war" on drugs, combating terrorism, and deterring illegal immigration are long-term problems that are currently high on the public agenda and will not go away without long-term solutions. Tight budgets and the desire for a quick fix do not create an emergency justifying the conversion of martial rhetoric to reality. Relegating these problems to a military solution poses dangers to our individual rights and to the history and underlying structure of the United States that should not be ignored. Resources must be made available to create viable civilian law enforcement responses to these problems. If these resources must be redirected from the military, then Congress should do so. Declare "war," but let it be fought by civilian law enforcement with the right weapons for the job. The military should be the last resort, not the first solution. In the long run, the "war" will be more effectively fought with dedicated "soldiers" with an undivided focus.[208]

_________________________________________________________________

[1.] Army Appropriations Act, ch.
263, § 15, 20 Stat. 145, 152 (1878) (codified as amended at 18 U.S.C. § 1385 (1994)). [2.] See 18 U.S.C. § 1385 (1994). [3.] See generally Jim McGee, Military Seeks Balance in Delicate Mission: The Drug War, WASH. POST, Nov. 29, 1996, at A1. The military has become "embedded" in the drug war and is performing domestic police missions traditionally belonging to civilian law enforcement. Id. [4.] Charles J. Dunlap, Jr., Welcome to the Junta: The Erosion of Civilian Control of the U.S. Military, 29 WAKE FOREST L. REV. 341, 342 (1994); see also McGee, supra note 3; Editorial, A Hasty Response to Terrorism, N.Y. TIMES, June 9, 1995, at A28. [5.] On April 19, 1995, a fertilizer bomb in a parked truck destroyed the federal office building in Oklahoma City, Oklahoma. David Johnston, Terror in Oklahoma City: The Investigation, at Least 31 Are Dead, Scores Are Missing After Car Bomb Attack in Oklahoma City Wrecks 9-Story Federal Office Building, N.Y. TIMES, Apr. 20, 1995, at A1, B8. At least 165 people were killed. See Terror in Oklahoma: The Victims; 165 People Who Were Killed in the Oklahoma City Explosion, N.Y. TIMES, May 7, 1995, at 36 (list of those killed). [6.] Todd S. Purdum, Terror in Oklahoma: The Overview, Clinton Seeks More Anti-Terrorism Measures, N.Y. TIMES, Apr. 27, 1995, at A1, A21. "Weapons of mass destruction . . . are generally considered to be nuclear or massive chemical or biological weapons." Id. The exception to the PCA would have been enacted in the Counterterrorism Act of 1995, S. 735, 104th Cong., 1st Sess. § 908 (June 5, 1995) (version 4) (the House version was H.R. 1710). The House of Representatives later deleted this provision from their version of the bill to gain support from conservative Republicans and salvage the legislation. Terrorism Bill Plan May Break Deadlock, N.Y. TIMES, Dec. 2, 1995, at 8. An exception for nuclear materials is already law. See 18 U.S.C. §
831 (1994) (authorizing the Attorney General to request assistance from the Department of Defense in enforcing prohibitions against transactions involving nuclear materials). [7.] See Border Integrity Act of 1995, H.R. 1224, 104th Cong., 1st Sess. [8.] See Otto Kreisher, Alexander's Ideas Hard to Pin Down; Military, Welfare Experts Call Plans Lousy, Unworkable, SAN DIEGO UNION-TRIB., Mar. 1, 1996, at A6, available in 1996 WL 2145186. [9.] See Department of Defense Authorization Act, 1982, Pub. L. No. 97-86, § 905, 95 Stat. 1099, 1114-16 (1981) (codified as amended at 10 U.S.C. §§ 371-380 (1994)). [10.] National Defense Authorization Act for Fiscal Years 1990 and 1991, Pub. L. No. 100-189, § 1202, 103 Stat. 1353, 1563 (1989) (codified as amended at 10 U.S.C. § 124(a) (1994)). The statute states as follows:
Lead Agency.--
(1) The Department of Defense shall serve as the single lead agency of the Federal Government for the detection and monitoring of aerial and maritime transit of illegal drugs into the United States.
(2) The responsibility conferred by paragraph (1) shall be carried out in support of the counter-drug activities of Federal, State, local, and foreign law enforcement agencies.
10 U.S.C. § 124(a) (1994); see also McGee, supra note 3, at A1 (since 1989, the military has spent over seven billion dollars on counter-drug efforts). [11.] Posse comitatus is defined as follows: "The power or force of the county. The entire population of a county above the age of fifteen, which a sheriff may summon to his assistance in certain cases, as to aid him in keeping the peace, in pursuing and arresting felons, etc." BLACK'S LAW DICTIONARY 1162 (6th ed. 1990). The definition is your basic movie western posse. In 1854, the Attorney General interpreted posse comitatus to include the military. See infra notes 44-45 and accompanying text.
In Norman England, the posse comitatus also had a military character and could be called out to defend the kingdom against insurrection and invasion. Walter E. Lorence, The Constitutionality of the Posse Comitatus Act, 8 U. KAN. CITY L. REV. 164, 166-67 (1939-40). [12.] 18 U.S.C. § 1385 (1994). Currently, the fine for individuals is up to $250,000. See 18 U.S.C. § 3571(b) (1994). [13.] Posse Comitatus Act: Hearing Before the Subcomm. on Crime of the Comm. on the Judiciary on H.R. 3519, 97th Cong., 1st Sess. 10-11 (1981) [hereinafter PCA Hearing] (statement of Edward S.G. Dennis, Jr., Chief, Narcotics and Dangerous Drug Sec., Crim. Div., U.S. Dep't of Justice). [14.] See infra Part IV. [15.] See, e.g., Stephen Labaton, Bill on Terrorism, Once a Certainty, Derails in House, N.Y. TIMES, Oct. 3, 1995, at A1; The F.B.I. Overreaches, N.Y. TIMES, May 10, 1995, at A22; see also James Bennett, Two States, Two Gatherings and a Lot of Anti-Government Sentiment, N.Y. TIMES, Oct. 3, 1995, at A1. [16.] See Laird v. Tatum, 408 U.S. 1, 3-8 (1972) (discussing the Army's domestic surveillance system in the late 1960s). The plaintiffs sued to stop the Army from compiling files on civilians as part of its support of federal law enforcement. Id. at 2. The Supreme Court dismissed the lawsuit for lack of standing. Id. at 12-15; see also Letter from former Senator Sam J. Ervin, Jr. to Rep. William J. Hughes, Chairman, Subcomm. on Crime, Comm. on the Judiciary, House of Representatives (June 2, 1981) [hereinafter Ervin Letter] (Sen. Ervin chaired the committee that investigated the military's spying on civilians in 1967 and 1968), in PCA Hearing, supra note 13, at 86. [17.] Border duty is a direct use of the military to execute civilian laws. Use of the military for investigative support is on the opposite end of the spectrum--a passive, indirect execution of civilian laws with only minor involvement foreseen.
Drug interdiction falls between the two: it is passive, but the involvement is extensive. In 1993, the Department of Defense had $1.4 billion in its annual budget to finance drug interdiction. Charles J. Dunlap, Jr., The Last American Warrior: Non-Traditional Missions and the Decline of the U.S. Armed Forces, FLETCHER F. WORLD AFF., Winter/Spring 1994, at 65, 69. For a description of current activities of the military in counter-drug activities, see McGee, supra note 3. [18.] See infra notes 49-51 and accompanying text. [19.] See supra note 13 and infra notes 20-43 and accompanying text. "[M]emories of the arrogance of the British Army" fueled the early controversy over the military. M.E. Bowman, The Military Role in Meeting the Threat of Domestic Terrorism, 39 NAVAL L. REV. 209, 211 (1990). [20.] The Declaration of Independence eloquently expressed this mistrust: He has erected a multitude of New Offices, and sent hither swarms of Officers, to harass our People, and eat out their substance. He has kept among us, in times of peace, Standing Armies without the Consent of our Legislature. He has affected to render the Military independent of and superior to the Civil Power. He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their acts of pretended Legislation: For quartering large bodies of armed troops among us: For protecting them, by a mock Trial, from Punishment for any Murders which they should commit on the Inhabitants of these States: . . . . He has abdicated Government here, by declaring us out of his Protection and waging War against us. He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people.
He is at this time transporting large armies of foreign mercenaries to compleat the works of death, desolation and tyranny, already begun with circumstances of Cruelty & perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation. He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands. THE DECLARATION OF INDEPENDENCE paras. 12-19 (U.S. 1776). [21.] See PCA Hearing, supra note 13, at 15 (statement of William Howard Taft IV, Gen. Counsel, U.S. Dep't of Defense); ANDREW J. GOODPASTER & SAMUEL P. HUNTINGTON, CIVIL-MILITARY RELATIONS 9-11 (1977); cf. Letter from Larry L. Simms, Deputy Asst. Att'y Gen., Office of Legal Counsel, U.S. Dep't of Justice, to Rep. L.A. "Skip" Befalis 4-5 (Aug. 6, 1979) (noting that because "the primary mission of the Navy is to be the instrument of seapower in the national defense," it should not become involved in interdiction efforts despite the lack of a PCA bar to such involvement), reprinted in PCA Hearing, supra note 13, at 523, 526-27. [22.] See generally U.S. CONST.; ARTICLES OF CONFEDERATION, 1 Stat. 4 (U.S. 1778) (superseded by U.S. CONST.); THE DECLARATION OF INDEPENDENCE (U.S. 1776). [23.] THE DECLARATION OF INDEPENDENCE para. 15 (U.S. 1776). [24.] Id. para. 13 ("He has kept among us, in times of peace, Standing Armies without the Consent of our Legislature."). [25.] Id. para. 14 ("He has affected to render the Military independent of and superior to the Civil Power."). [26.] Id. para. 15 ("For quartering large bodies of armed troops among us: For protecting them, by a mock Trial, from Punishment for any Murders which they should commit . . . ."). [27.] ARTICLES OF CONFEDERATION arts. 6, §§ 4, 5, 1 Stat. 4, 4-5 (U.S. 1778); id. art. 7, 1 Stat. at 5. [28.] Id. art. 6, §§ 4-5, 1 Stat. at 4-5.
No vessels of war shall be kept up in time of peace by any State, except such number only, as shall be deemed necessary by the United States in Congress assembled, for the defence of such State, or its trade; nor shall any body of forces be kept up by any State, in time of peace, except such number only, as in the judgment of the United States, in Congress assembled, shall be deemed requisite to garrison the forts necessary for the defence of such State; but every State shall always keep up a well regulated and disciplined militia, sufficiently armed and accoutered, and shall provide and constantly have ready for use, in public stores, a due number of field pieces and tents, and a proper quantity of arms, ammunition and camp equipage. No State shall engage in any war without the consent of the United States in Congress assembled, unless such State be actually invaded by enemies . . . : nor shall any State grant commissions to any ships or vessels of war, nor letters of marque or reprisal, . . . unless such State be infested by pirates, in which case vessels of war may be fitted out for that occasion, and kept so long as the danger shall continue, or until the United States in Congress assembled shall determine otherwise. Id. [29.] Id. art. 7, 1 Stat. at 5. When land-forces are raised by any State for the common defence, all officers of or under the rank of colonel, shall be appointed by the Legislature of each State respectively by whom such forces shall be raised, or in such manner as such State shall direct, and all vacancies shall be filled up by the State which first made the appointment. Id. [30.] John A. Hardaway, Colonial and Revolutionary War Origins of American Military Policy, MIL. REV., Mar. 1976, at 77, 81. [31.] J. Bryan Echols, Open Houses Revisited: An Alternative Approach, 129 MIL. L. REV. 185, 200 (1990). [32.] U.S. CONST. art. 1, § 8, cls. 12-13. [33.] Id. art. 1, § 8, cl. 12 ("The Congress shall have Power . . .
To raise and support Armies, but no Appropriation of Money to that Use shall be for a longer Term than two Years . . . ."). However, appropriations for the Navy are not similarly limited. See id. cl. 13 ("To provide and maintain a Navy . . . ."). The failure to limit navy appropriations follows from the failure to mention the navy as an evil in the Declaration of Independence. See supra note 20. Indeed, navies were seen as instruments of the great powers and were not thought to be a threat to civil supremacy. [34.] U.S. CONST. art. 2, § 2 ("The President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States . . . ."). [35.] Id. art. 1, § 8, cl. 14 ("To make Rules for the Government and Regulation of the land and naval forces."). The Constitution preserves the right of states to appoint officers for their militias, but limits the states' authority by requiring militia training to conform with Congress's requirements. Id. cl. 16; cf. THE ARTICLES OF CONFEDERATION art. 7 (U.S. 1778) (as discussed supra notes 27-30 and accompanying text). [36.] U.S. CONST. amend. III ("No Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law."). For a discussion of the history and purpose of the Third Amendment, see William Sutton Fields, The Third Amendment: Constitutional Protection from the Involuntary Quartering of Soldiers, 124 MIL. L. REV. 195 (1989). [37.] U.S. CONST. amend. II ("A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."). [38.] See Peter M. Sanchez, The "Drug War": The U.S. Military and National Security, 34 A.F. L. REV.
109, 117 (1991) ("Despite [Alexander] Hamilton's assurances [in response to public fears regarding a national army], the Constitution was only ratified after Federalists agreed to incorporate the Bill of Rights." (quoting W. PETERS, A MORE PERFECT UNION 231-37 (1987))). [39.] U.S. CONST. amend. I. [40.] Id. amend. IV. [41.] See Sanchez, supra note 38, at 118 (mentioning the belief that the Bill of Rights "would effectively preclude the new government from usurping the rights of citizens, whether by military or other means"). [42.] See THE FEDERALIST NO. 41, at 258-60 (James Madison) (Clinton Rossiter ed., 1961); see also Dunlap, supra note 4, at 388 ("[T]he military is the antithesis of democracy." (footnote omitted)); Hardaway, supra note 30, at 80 (Colonial feeling was that a standing army "was simply incompatible with a democratic system of government."). "The ideals of liberty, democracy, equality, and peace have contrasted with the military's concern with authority, hierarchy, obedience, force, and war." GOODPASTER & HUNTINGTON, supra note 21, at 7. "It ill behooves a democracy to become over-fond of its soldiery," said one Founding Father. DeWitt C. Smith, Jr., From Yesterday's Fears to Today's Realities, PARAMETERS, Fall 1977, at 90, 90. The Founding Fathers' fear of a standing army is also evident in the constitutions of the thirteen original states, which note that standing armies are "dangerous." See, e.g., N.H. CONST. pt. 1, art. 25 ("Standing armies are dangerous to liberty, and ought not to be raised, or kept up, without consent of the legislature."); VA. CONST. art. 1, § 13 ("[S]tanding armies, in time of peace, should be avoided as dangerous to liberty; and . . . in all cases the military should be under strict subordination to, and governed by, the civil power."); see also H.W.C. Furman, Restrictions upon Use of the Army Imposed by the Posse Comitatus Act, 27 MIL. L. REV. 85, 92 n.41 (1960). [43.] See Smith, supra note 42, at 90. [44.]
See Act of Sept. 18, 1850, ch. 60, 9 Stat. 462, 462-63. See supra note for the definition of posse comitatus. [45.] Extradition of Fugitives from Service, 6 Op. Att'y Gen. 466, 473 (1854). [46.] Furman, supra note 42, at 93. [47.] See Lorence, supra note 11, at 169. [48.] See id. The military support of these governments was described as a "rotten-borough or carpet-bag system." G. NORMAN LIEBER, U.S. WAR DEP'T, OFFICE OF THE JUDGE ADVOCATE GEN., DOC. NO. 64, THE USE OF THE ARMY IN AID OF THE CIVIL POWER 10 (1898) (quoting Rep. J.D.C. Atkins). In the Reconstruction South a monumental task faced the military: Unplanned and aimed not at eradicating states but at hurrying their return to the Union . . . , [the Military Reconstruction Laws] one way or another imposed on the Army the duties of initiating and implementing state-making on the basis of biracial citizen participation. Protecting the personnel of the federal courts and Freedman's Bureau, shielding blacks and whites who collaborated in the new order of equality under state law from retaliations by indignant vigilante neighbors, and monitoring the quality of daily marketplace justice in ten thousand villages--these were tasks that West Point had not prepared Army officers to perform. Harold M. Hyman, Ulysses Grant I, Emperor of America?: Some Civil-Military Continuities and Strains of the Civil War and Reconstruction, in THE UNITED STATES MILITARY UNDER THE CONSTITUTION OF THE UNITED STATES, 1789-1989, at 175, 186 (Richard H. Kohn ed., 1991) [hereinafter U.S. MILITARY UNDER THE CONSTITUTION]. The size of the standing army increased in post-Civil War America. Emory Upton, The Military Policy of the United States (1881) (unpublished manuscript) (see tables and charts), reprinted in S. DOC. NO. 379, 1st Sess. (1916). With its increased size and permanence, the army began transforming itself into a professional army. See id.
With a standing army of significant size, the PCA became even more important as a limit on the role of the military in society and as a shield from improper requests by civilian authorities. See PCA Hearing, supra note , at 38-39 (statement of Christopher H. Pyle, Professor, Mount Holyoke College). [49.] Furman, supra note 42, at 94; Lorence, supra note 11, at 173-74. [50.] See Lorence, supra note 11, at 172-74. [51.] Furman, supra note 42, at 94-95, 94 & nn.56-57; Lorence, supra note 11, at 172. [52.] Furman, supra note 42, at 94-96; see also LIEBER, supra note 48, at 10-12; Lorence, supra note 11, at 174-79; cf. Federal Document Clearing House, Buyer Offers Amendment to Anti-terrorism Legislation; Limits and Clarifies Role of Military, Gov't Press Release, June 14, 1995, available in 1995 WL 14249788 (suggesting that the PCA was also passed to prevent the use of troops to quell labor disputes). Grant's improper actions finally pushed the PCA through after several prior defeats. The original text of the PCA read as follows: From and after the passage of this act it shall not be lawful to employ any part of the Army of the United States, as a posse comitatus, or otherwise, for the purpose of executing the laws, except in such cases and under such circumstances as such employment of said force may be expressly authorized by the Constitution or by act of Congress; and no money appropriated by this act shall be used to pay any of the expenses incurred in the employment of any troops in violation of this section and any person wilfully violating the provisions of this section shall be deemed guilty of a misdemeanor and on conviction thereof shall be punished by fine not exceeding ten thousand dollars or imprisonment not exceeding two years or by both such fine and imprisonment. Army Appropriations Act, ch. 263, § 15, 20 Stat. 145, 152 (1878). Congress adopted the modern text of the PCA, see supra note 12, in 1956. See An Act to Codify Title 10 and Title 32, ch. 1041, §
18(a), 70A Stat. 1, 626 (1956) (codified as amended at 18 U.S.C. § 1385 (1994)). The modern text was meant "to restate, without substantive change, the law replaced." Id. § 49(a), 70A Stat. at 640. In 1866, prior to the passage of the PCA, the Supreme Court cast doubt on the use of the military to enforce the laws in place of civilian authorities, even in a time of stress. See Ex parte Milligan, 71 U.S. (4 Wall.) 2 (1866). [53.] CHARLES DOYLE, CONG. RES. SERV., NO. 88-583A, USE OF THE MILITARY TO ENFORCE CIVILIAN LAW: POSSE COMITATUS ACT AND OTHER CONSIDERATIONS 2 (1988); Furman, supra note 42, at 86; Interview with Brig. Gen. Walter B. Huffman, Asst. Judge Advocate Gen. for Military Law and Operations, U.S. Army, in Burke, Va. (Dec. 31, 1995) [hereinafter Huffman Interview]. In 1879, two Army officers were indicted in Texas for violating the PCA after providing a U.S. marshal with troops to enforce the revenue laws. LIEBER, supra note 48, at 28 n.1. Other than this one mention, there is no record that the officers were ever prosecuted. [54.] See PCA Hearing, supra note, at 10 (statement of Edward S.G. Dennis, Jr., Chief, Narcotics and Dangerous Drug Sec., Crim. Div., U.S. Dep't of Justice); Sanchez, supra note 38, at 120. The lack of opportunity for judicial interpretation will continue because military enforcement of the PCA is seldom challenged in the courts. Id. [55.] See infra Part III.B. [56.] See, e.g., Chandler v. United States, 171 F.2d 921, 935-36 (1st Cir. 1948), cert. denied, 336 U.S. 918 (1949) (finding jurisdiction where the Army arrested the defendant in Germany and returned him to the United States); Ex parte Mason, 256 F. 384, 385-87 (C.C.S.D.N.Y. 1882) (finding jurisdiction for military court in a court martial proceeding against a soldier for an attempted murder while on guard duty at a civilian jail despite finding a PCA violation). [57.] See United States v. McArthur, 419 F. Supp. 186 (D.N.D. 1975), aff'd sub nom. United States v.
Red Feather, 541 F.2d 1275 (8th Cir. 1976), cert. denied sub nom., Casper v. United States, 430 U.S. 970 (1977); United States v. Red Feather, 392 F. Supp. 916 (D.S.D. 1975), aff'd, 541 F.2d 1275 (8th Cir. 1976); United States v. Jaramillo, 380 F. Supp. 1375 (D. Neb. 1974), appeal dismissed, 510 F.2d 808 (8th Cir. 1975). But see People v. Burden, 288 N.W.2d 392 (Mich. Ct. App. 1979) (applying exclusionary rule to drug investigation in which a member of the U.S. Air Force participated with the approval of his commander), rev'd, 303 N.W.2d 444 (Mich. 1981); Taylor v. State, 645 P.2d 522 (Okla. Crim. App. 1982) (applying the exclusionary rule for a PCA violation as a result of a military police officer's active participation in a search and arrest). [58.] See Jaramillo, 380 F. Supp. 1375 (D. Neb. 1974) (directed verdict on charge of obstruction of law enforcement officers for defendants because the military's involvement in response to civil disorder raised reasonable doubt whether the officers were in lawful performance of their duty), appeal dismissed, 510 F.2d 808 (8th Cir. 1975). [59.] Ch. 646, 62 Stat. 983 (1948) (codified as amended at 28 U.S.C. § 2674 (1994)). [60.] Wrynn v. United States, 200 F. Supp. 457, 465 (E.D.N.Y. 1961) (finding a PCA violation where Air Force helicopter and pilots were used to search for civilian prison escapees and plaintiff was injured when the helicopter landed). [61.] PCA Hearing, supra note 13, at 10 (statement of Edward S.G. Dennis, Jr., Chief, Narcotics and Dangerous Drug Sec., Crim. Div., U.S. Dep't of Justice) (citing REPORT OF THE TASK FORCE ON EVALUATION OF AUDIT, INSPECTION AND INVESTIGATIVE COMPONENTS OF THE DEP'T OF DEFENSE 197 (1980)). [62.] See infra notes 117-20 and accompanying text. [63.] See PCA Hearing, supra note 13, at 12; H. REP. NO. 97-71, pt. 2, at 3 (1981); see also INTERNATIONAL & OPERATIONAL LAW DEP'T, THE JUDGE ADVOCATE GENERAL'S SCHOOL, U.S. ARMY, DOC. NO.
JA422, OPERATIONAL LAW HANDBOOK 22-2 (1995) [hereinafter ARMY LAW HANDBOOK]. [64.] 18 U.S.C. § 1385 (1994). [65.] Id.; see DOYLE, supra note 53, at 12 (stating that "case law does not definitively answer the question of what constitutes use to 'execute the laws'"). [66.] 18 U.S.C. § 1385. The Air Force was expressly included under the PCA when Congress codified Title 10 of the U.S. Code in 1956. United States v. Walden, 490 F.2d 372, 375 n.5 (4th Cir. 1974); see also supra note 52. The inclusion was natural because the Air Force was originally part of the Army as the Army Air Corps. Walden, 490 F.2d at 374-75, 375 n.5. Even prior to 1956, Congress included the Air Force under the PCA. See National Security Act of 1947, ch. 343, §§ 207(a), 208(a), 305(a), 61 Stat. 495, 502-04, 508. [67.] See United States v. Yunis, 924 F.2d 1086 (D.C. Cir. 1991) (failing to extend PCA prohibitions to the Navy where defendant was transported to the U.S. on a Navy vessel); Schowengerdt v. General Dynamics Corp., 823 F.2d 1328 (9th Cir. 1987) (same), cert. denied, 503 U.S. 951 (1992). But see Walden, 490 F.2d at 375 (noting that failure to include the Navy in the text of the PCA does not evince congressional approval of the Navy's use to enforce civilian laws), cert. denied, 416 U.S. 983 (1979); People v. Caviano, 560 N.Y.S.2d 932, 936 & n.1 (N.Y. Sup. Ct. 1990) (finding the PCA applicable to the Navy); People v. Blend, 175 Cal. Rptr. 263, 267 (Cal. Ct. App. 1981) (stating that the PCA "applies to all branches of the federal military"). The first version of the Act would have included "any part of the land or naval forces." 7 CONG. REC. 3586 (1878). The Navy was possibly dropped to avoid a challenge under House rules for germaneness because the law was in an army appropriations bill. Walden, 490 F.2d at 374; DOYLE, supra note 53, at 15; see 7 CONG. REC. 3845 (1878) (chronicling the debate under the House rule for germaneness).
Although it is unlikely, another suggestion is that the failure to include the Navy was a drafting error. PCA Hearing, supra note 13, at 22 (comments of Rep. William J. Hughes). [68.] U.S. DEP'T OF DEFENSE, DIRECTIVE NO. 5525.5, DOD COOPERATION WITH CIVILIAN LAW ENFORCEMENT OFFICIALS encl. 4, at 4-6 (Jan. 15, 1986) (extending the PCA's application to the Navy and Marine Corps "as a matter of DoD policy") [hereinafter DOD DIR. 5525.5], available at (visited Jan. 1, 1996); SECRETARY OF THE NAVY, DEP'T OF THE NAVY, INSTRUCTION NO. 5820.7B, COOPERATION WITH CIVILIAN LAW ENFORCEMENT OFFICIALS, OFFICE OF THE SECRETARY 4 (Mar. 28, 1988) (same) [hereinafter SECNAVINST 5820.7B]. Interestingly, the Code of Federal Regulations included a section applying the PCA to the Navy and Marines which substantially followed the DoD Directive, compare DoD Cooperation with Civilian Law Enforcement Officials, 32 C.F.R. pt. 213 (1992) with DOD DIR. 5525.5, supra, but the Defense Department repealed the section in 1993 because it had "served the purpose for which [it was] intended and [is] no longer valid." 58 Fed. Reg. 25,776 (Apr. 28, 1993) (repealing 32 C.F.R. pt. 213). [69.] 14 U.S.C. § 1 (1994); DOYLE, supra note 53, at 16. [70.] 14 U.S.C. § 2 (1994). As part of the drug interdiction effort, Coast Guard personnel are detailed to Navy ships to perform their law enforcement function because the Navy cannot exercise the powers of arrest or search and seizure. See 10 U.S.C. § 379 (1994). The Department of Justice has taken the position that members from the other military branches are also not limited by the PCA when detailed to the Department of Transportation, because they are not subject to military command or charged to the military force structure. See Memorandum from William H. Rehnquist, Asst. Att'y Gen., Office of Legal Counsel, U.S. Dep't of Justice, to Benjamin Forman, Asst. Gen. Counsel (Int'l Affairs), U.S. Dep't of Defense 3 (Sept.
30, 1970) (Legality of Deputizing Military Personnel Assigned to the Department of Transportation), in PCA Hearing, supra note 13, at 562, 564. [71.] Furman, supra note 42, at 101; Sanchez, supra note 38, at 119. However, the National Guard is still limited by applicable state law when in state service. Id. [72.] DOYLE, supra note 53, at 17. [73.] United States v. Jaramillo, 380 F. Supp. 1375, 1382 (1974) (excluding the Special Operations Group of the United States Marshals Service from the definition of army under the PCA), appeal dismissed, 510 F.2d 808 (8th Cir. 1975). For a full discussion of the PCA and its application to all the different elements of the military, see Furman, supra note 42, at 98-103 & 99 fig. [74.] See, e.g., cases cited supra note 67. The "any part" language applies the PCA to any unit of troops, regardless of "its size or designation." Jaramillo, 380 F. Supp. at 1379. [75.] Congress acknowledged this fact during the debates regarding the enactment of the PCA, admitting that the Act was not meant to limit a soldier as a citizen. See 7 CONG. REC. 4245 (1878) (comments of Sen. Merrimon in response to a question of whether a soldier could come to the defense of a fellow citizen being assaulted). Senator Merrimon stated in Congress: If a soldier sees a man assaulting me with a view to take my life, he is not going to stand by and see him to do it; he comes to my relief not as a soldier, but as a human being, a man with a soul in his body, and as a citizen. Id. [76.] See DOD DIR. 5525.5, supra note 68, encl. 4, at 4-6; SECNAVINST 5820.7B, supra note 68, ¶ 9.b(4), at 7. The test has also been described as based upon the service member's conduct and upon whether civilians were subjected to military power. See DOYLE, supra note 53, at 19. [77.] See Clarence I. Meeks III, Illegal Law Enforcement: Aiding Civil Authorities in Violation of the Posse Comitatus Act, 70 MIL. L. REV. 83, 100 (1975).
It would not suffice to allow the principal to achieve a goal through an agent that the principal is proscribed from achieving herself. See id. [78.] DOD DIR. 5525.5, supra note 68, encl. 4, at 4-6; SECNAVINST 5820.7B, supra note 68, ¶ 9.b(3), at 7. [79.] Furman, supra note 42, at 102-03 & n.109 (citing a 1956 opinion of the Judge Advocate General of the U.S. Army). But see SECNAVINST 5820.7B, supra note 68, ¶ 9.c(2) (extending the PCA prohibitions to the Navy's civilian employees). [80.] Because the PCA has been extended to the Navy and Marine Corps by regulation, see supra note 68 and accompanying text, the term "military" is used in this Note to refer to the Army, Navy, Air Force, and Marine Corps. [81.] 18 U.S.C. § 1385 (1994); supra note 12 (text of the Act). The PCA also is limited to "willful" use, see text accompanying supra note 12, but courts have not used the term to limit the PCA's scope. DOYLE, supra note 53, at 11 n.18. [82.] See infra notes 84-103 and accompanying text. [83.] See DOYLE, supra note 53, at 11-13. Courts and commentators have read the legislative history of the PCA to allow "incidental" benefits to civilian law enforcement. Id. at 13. [84.] 392 F. Supp. 916 (D.S.D. 1975) (granting a motion in limine to exclude evidence of activities of military personnel in a criminal prosecution for interfering with law enforcement officers at Wounded Knee). [85.] Id. at 921-23. [86.] Id. at 923. The defendant was charged with interfering with law enforcement officers in lawful execution of their duties. Id. at 918-19. The government filed a motion in limine to bar evidence of the loan of military equipment, of the military's presence, and of any other military involvement at Wounded Knee. Id. at 918. The judge found that evidence of a PCA violation would be admissible because it would relate to whether the federal officers acted in lawful execution of their duties, but granted the motion to bar evidence of passive involvement. Id. at 925. [87.] Ch.
314, § 601, 47 Stat. 382, 417-18 (1932) (codified as amended at 31 U.S.C. § 1535 (1994)). [88.] Red Feather, 392 F. Supp. at 923. [89.] 380 F. Supp. 1375, 1381 (D. Neb. 1974) (directing verdict for defendant charged with obstructing law enforcement officers at Wounded Knee because a possible violation of the PCA raised a reasonable doubt as to whether the officers acted in lawful performance of their duty), appeal dismissed, 510 F.2d 808 (8th Cir. 1975). [90.] Id. at 1379. [91.] Id. The list of material provided by the military included star parachute flares, M-16 ammunition, protective vests, sniper rifles, and unarmed armored personnel carriers. Id. [92.] See id. at 1380-81. Military observers counseled the federal authorities to substitute a shoot-to-wound policy for their shoot-to-kill policy, encouraged negotiations, and approved the request for armored personnel carriers with strict conditions on their use. Id. at 1379-80. [93.] Id. at 1381. [94.] Id. [95.] 419 F. Supp. 186 (D.N.D. 1975) (finding that the approach of the court in Jaramillo, see supra notes 89-94 and accompanying text, would not establish a PCA violation on the part of federal authorities), aff'd sub nom., United States v. Red Feather, 541 F.2d 1275 (8th Cir. 1976), cert. denied sub nom., Casper v. United States, 430 U.S. 970 (1977). [96.] 541 F.2d 1275, 1278 (8th Cir. 1976) (referring with approval to the McArthur test formulated in the district court), cert. denied sub nom., Casper v. United States, 430 U.S. 970 (1977). [97.] See McArthur, 419 F. Supp. at 194. [98.] Id. [99.] See id. at 194-95. [100.] 681 F. Supp. 891 (D.D.C. 1988) (denying motion to dismiss indictment of airplane hijacker who was transported to the United States on Naval vessels). [101.] Id. at 895. A prisoner under the exclusive control and authority of civilian law enforcement while being transported by the military is not subjected to military regulatory power. See id. [102.] Id. at 896.
A prisoner aboard a military vessel and confined by military personnel, but in the custody of civilian law enforcement at all times, is not subjected to military proscriptive power. See id. [103.] Id. Involvement of the military which is "indifferent, passive, and subservient" to civilian law enforcement is not a PCA violation. See id. [104.] 171 F.2d 921 (1st Cir. 1948). [105.] Id. at 936. The Army arrested the defendant in Germany after World War II and then transported him to the United States to face charges of treason. Id. at 927. The defendant, a U.S. citizen, was convicted of treason for his propaganda radio broadcasts on behalf of the German government during World War II. Id. at 928-29. The Ninth Circuit followed Chandler in dealing with the arrest of Tokyo Rose and her return to the United States by military authorities. See Iva Ikuko Toguri D'Aquino v. United States, 192 F.2d 338, 351 (9th Cir. 1951), cert. denied, 343 U.S. 935 (1952). The Yunis court could have also found no PCA violation by following Chandler and D'Aquino. See Yunis, 681 F. Supp. 891 (D.D.C. 1988) (finding no violation of the PCA under the McArthur test, see supra notes 95-98 and accompanying text, where the allegedly unlawful military involvement occurred outside the United States). [106.] Huffman Interview, supra note 53. [107.] See Foreign Assistance Act of 1974, Pub. L. No. 93-559, § 30(a), 88 Stat. 1795, 1803 (codified as amended at 22 U.S.C. § 2420(a) (1994)). Under the Foreign Assistance Act, Congress prohibited military foreign assistance monies from being spent to "provide training or advice . . . for police, prisons, or other law enforcement forces for any foreign government or any program of internal intelligence or surveillance on behalf of any foreign government within the United States or abroad." Id.; see also International Security Assistance Act of 1978, Pub. L. No. 95-384, § 3, 92 Stat. 730, 730 (codified as amended at 22 U.S.C. §
2291(c) (1994)) (barring officers and employees of the United States from making arrests in foreign countries as part of drug control efforts); DOYLE, supra note 53, at 25. [108.] 18 U.S.C. § 1385 (1994) ("except in cases and under circumstances expressly authorized by the Constitution or Act of Congress"). [109.] The statutory language recognizing constitutional exceptions in the PCA was a compromise. [110.] 7 CONG. REC. 4686 (1878); see U.S. CONST. art. 2, §§ 2, 3. [111.] Furman, supra note 42, at 91-92; Lorence, supra note 11, at 185-91. [112.] DOYLE, supra note 53, at 20. [113.] See 343 U.S. 579, 644-45 (1952) (Jackson, J., concurring); PCA Hearing, supra note 13, at 41 n.39 (statement of Christopher H. Pyle, Professor, Mount Holyoke College). [114.] DOD DIR. 5525.5, supra note 68, encl. 4, ¶ A(2)(c), at 4-2. The exception permits military action to protect federal property and functions, to prevent loss of life, and to restore public order when local authorities cannot control a situation. Id. These exceptions have yet to be tested. DOYLE, supra note 53, at 21 n.29. The Office of Legal Counsel at the Department of Justice bases the exception explicitly on the President's duty to faithfully execute the laws. Memorandum from William H. Rehnquist, Asst. Att'y Gen., Office of Legal Counsel, U.S. Dep't of Justice, to Robert E. Jordan III, Gen. Counsel, U.S. Dep't of the Army 1-2 (May 11, 1970) (Authority to Use Troops to Protect Federal Functions, Including the Safeguarding of Foreign Embassies in the United States) [hereinafter Federal Functions Memorandum], in PCA Hearing, supra note 13, at 558, 559. [115.] Federal Functions Memorandum, supra note 114, at 1-2. [116.] Id. [117.] See ARMY LAW HANDBOOK, supra note 63, at 22-2. [118.] Others have implicitly recognized this distinction. See PCA Hearing, supra note , at 35-37 (comments by Christopher H.
Pyle, Professor, Mount Holyoke College); Paul Jackson Rice, New Laws and Insights Encircle the Posse Comitatus Act, 104 MIL. L. REV. 109 (1984). This Note makes the distinction explicit here to illustrate that both types of exceptions have deleterious effects and should be avoided. [119.] See PCA Hearing, supra note , at 35-37 (comments of Christopher H. Pyle, Professor, Mount Holyoke College). [120.] See, e.g., PCA Hearing, supra note, at 11 (statement of Edward S.G. Dennis, Chief, Narcotics and Dangerous Drug Sec., Crim. Div., U.S. Dep't of Justice). Congress intended to clarify the boundaries of the PCA. ARMY LAW HANDBOOK, supra note 63, at 22-2. [121.] 10 U.S.C. §§ 372, 381 (1994). [122.] Id. § 373(2). [123.] Id. § 371. [124.] Id. § 373(1). [125.] See supra notes 95-103 and accompanying text. [126.] See 48 U.S.C. § 1418 (1994). This exception existed before the passage of the PCA. Act of Aug. 18, 1856, ch. 164, § 5, 11 Stat. 119, 120. Guano islands are islands rich in guano deposits. See 48 U.S.C. § 1411 (1994). Guano is "a substance that is found on some coasts or islands frequented by sea fowl, is composed chiefly of their partially decomposed excrement, is rich in phosphates, nitrogenous matter, and other material for plant growth, and has been used extensively as a fertilizer." WEBSTER'S THIRD NEW INTERNATIONAL DICTIONARY 1007 (1986). A great rush of claims made on guano islands occurred from 1856 to 1903: 94 claims were made during that period, and 66 islands were recognized by the State Department. JIMMY M. SKAGGS, THE GREAT GUANO RUSH: ENTREPRENEURS AND AMERICAN OVERSEAS EXPANSION 200 (1994). The law continues to have relevance because the United States still maintains possession of nine of the recognized islands. Id. For an in-depth discussion of the rush to claim guano-rich islands, see generally SKAGGS, supra. [127.] See 25 U.S.C. § 180 (1994). This exception may have also provided authority for the military presence at Wounded Knee. [128.]
See 16 U.S.C. §§ 23 (detail of troops for protection of park), 78 (detail of troops to Sequoia and Yosemite Parks), 593 (protection of timber in Florida) (1994). [129.] See 18 U.S.C. § 1751(i) (1994). [130.] See 10 U.S.C. §§ 331-333 (1994). Other exceptions to the PCA, as listed in DOD DIR. 5525.5, supra note 68, encl. 4, ¶ A.2(e), at 4-2 to 4-3, include: (i) 16 U.S.C. § 1861(a) (1994) (enforcement of the Fishery Conservation and Management Act of 1976); (ii) 18 U.S.C. §§ 112, 1116 (1994) (assistance to law enforcement officers in crimes against foreign officials, official guests of the United States, and other internationally protected persons); (iii) 18 U.S.C. § 351 (1994) (assistance to law enforcement officers in crimes against members of Congress); (iv) 22 U.S.C. §§ 408, 461-462 (1994) (actions in support of the neutrality laws); (v) 18 U.S.C. § 831 (1994) (assistance to law enforcement officers in crimes involving nuclear materials); (vi) 42 U.S.C. § 97 (1994) (execution of quarantine and certain health laws); (vii) 43 U.S.C. § 1065 (1994) (removal of unlawful inclosures from public lands); (viii) 48 U.S.C. §§ 1422, 1591 (1994) (support for territorial governors if civil disorder occurs); (ix) 50 U.S.C. § 220 (1994) (actions in support of certain customs laws); and (x) 42 U.S.C. § 1989 (1994) (execution of certain warrants relating to enforcement of specified civil rights laws). [131.] Sanchez, supra note 38, at 120 & n.13. [132.] See Jerry M. Cooper, Federal Military Intervention in Domestic Disorders, in U.S. MILITARY UNDER THE CONSTITUTION, supra note 48, at 120. [133.] See generally Kurt Andrew Schlichter, Comment, Locked and Loaded: Taking Aim at the Growing Use of the American Military in Civilian Law Enforcement Operations, 26 LOY. L.A. L. REV. 1291 (1993) (the author served as an Army National Guardsman deployed to Los Angeles during the riots); Eric Schmitt, Elite U.S. Forces Sent in to Perform a Rare Role, N.Y. TIMES, May 2, 1992, § 1, at 8.
For a discussion of the military's use in relation to civil disorder, see Cooper, supra note 132. [134.] See DOYLE, supra note 53, at 13-14. [135.] Id. [136.] See supra notes 95-103 and accompanying text. [137.] DOYLE, supra note 53, at 14. [138.] JAMES B. JACOBS, SOCIO-LEGAL FOUNDATIONS OF CIVIL-MILITARY RELATIONS 54 (1986). [139.] Id. at 55. [140.] For a thorough discussion of the use of the military in labor disputes, see JOAN M. JENSEN, ARMY SURVEILLANCE IN AMERICA, 1775-1980, at 44-45, 139-40 (1991) and JACOBS, supra note 138, at 51-76. [141.] See, e.g., Edmund L. Andrews, Army Has Trouble Building Beachhead in Disaster Zone, N.Y. TIMES, Aug. 31, 1992, at A10; Edmund L. Andrews, Urgency Is Growing: 4 Days After Hurricane, Thousands Still Need Food and Shelter, N.Y. TIMES, Aug. 28, 1992, at A1; see also Stephanie Nano, Flood Area Hails Tardy Old Friend: The Sun, S.F. EXAMINER, July 19, 1993, at A1 (mentioning President Clinton's offer of federal troops to help with cleanup after floods in the Midwest); Jeffrey Schmalz, Troops Find Looting and Devastation on St. Croix, N.Y. TIMES, Sept. 22, 1989, at A22 (reporting the use of federal troops on St. Croix after Hurricane Hugo). [142.] Huffman Interview, supra note 53. [143.] This Note does not project immediate doom nor suggest that the armed forces or its members would consider a military coup or improperly influence the civilian government. Indeed, within the U.S. Army officer corps there exists "an implicit--one could almost say instinctive--acceptance of the civil power's superiority to the military in government." Edward M. Coffman, The Army Officer and the Constitution, PARAMETERS, Sept. 1987, at 2, 2. This Note argues that there are good reasons for the policies behind the PCA, policies that should not be discarded by the exigencies of the moment. Id. [144.] See supra notes 5-7 and accompanying text. [145.] H.R. 1224, 104th Cong., 1st Sess. (1995). [146.] See supra note 7 and accompanying text. [147.] S.
735, 104th Cong., 1st Sess. (1995). [148.] See supra note 6 and accompanying text. [149.] See supra notes 117-20, 126-33 and accompanying text. [150.] See supra notes 117-25 and accompanying text. [151.] See PCA Hearing, supra note , at 29-30 (comments by Rep. William J. Hughes). [152.] Soldiers do receive training in intermediate levels of force for peacekeeping missions, but the main focus is on how and when to use deadly force as part of the wartime rules of engagement. Huffman Interview, supra note 53 (LANE Training, which consists of simulation exercises in a field setting, provides soldiers with examples of hostile acts that can be responded to without waiting to be fired upon); see also Anthony DePalma, Canada Assesses Army: Warriors or Watchdogs?, N.Y. TIMES, Apr. 13, 1997, § 1, at 4 (noting increase of U.S. training for peacekeeping); Mark S. Martins, Rules of Engagement for Land Forces: A Matter of Training, Not Lawyering, 143 MIL. L. REV. 3, 27 (1994). This focus flows directly from the military's responsibility "to fight or be ready to fight wars should the occasion arise." Toth v. Quarles, 350 U.S. 11, 17 (1955), cited in Dunlap, supra note 4, at 357 n.119. To fulfill that responsibility, the Army has challenged itself to "[i]mprove[] lethality and readiness" in the 21st century. Dennis J. Reimer, Soldiers Are Our Credentials, MIL. REV., Sept.-Oct. 1995, at 4, 13 fig.5 (at the time of this writing the author was U.S. Army Chief of Staff). [153.] See generally Martins, supra note 152. It is interesting that we want the military to take on some police functions, yet we will not let that same military train foreign police or even give them advice. See Foreign Assistance Act of 1974, Pub. L. No. 93-559, § 30(a), 88 Stat. 1795, 1803 (codified as amended at 22 U.S.C. § 2420(a) (1994)); see also supra note 107 and accompanying text. [154.] Jesse Katz, A Good Shepherd's Death, L.A.
TIMES, June 21, 1997, at A1 (chronicling the debate about troop use near the border, a danger illustrated by this killing); Border Killing Brings Criticism of Military Role, CHICAGO TRIB., May 23, 1997, at 15. [155.] See supra notes 22-43 and accompanying text. [156.] See Laird v. Tatum, 408 U.S. 1 (1972). [157.] Dunlap, supra note 4, at 344 (emphasis removed). [158.] Id. at 343 (emphasis removed) (paraphrasing President Eisenhower's Farewell Address). [159.] Id. at 344 (quoting ALLAN R. MILLET, THE AMERICAN POLITICAL SYSTEM AND CIVILIAN CONTROL OF THE MILITARY 2 (1979)). [160.] Id. at 344 n.13 (quoting MILLET, supra note 159, at 2). [161.] GOODPASTER & HUNTINGTON, supra note 21, at 22 (citing David R. Segal et al., Convergence, Isomorphism, and Interdependence at the Civil-Military Interface, J. POL. & MIL. SOC., Fall 1974, at 157ff). A military establishment . . . that encompasses many nonmilitary functions and that operates in a civilianized manner is likely to be more autonomous--freer from civilian contacts and, potentially, civilian control--than a military establishment that is purely military and that, precisely because of its specialization, is dependent upon civilian society for support. Id. (quoting Segal, supra) (footnote omitted). [162.] See McGee, supra note 3, at A30. "[The military's involvement] should [have been] a temporary stopgap, but it's been institutionalized." Id. (quoting Lawrence J. Korb, Asst. Sec'y of Defense under President Reagan) (second set of brackets in original). [163.] See 134 CONG. REC. 11,643, at 11,644 (1988) (comments of Sen. Daniel P. Moynihan) (stating that "100 of the [illegal aliens crossing the Mexican border into the U.S.]
could bring across a year's supply and more of Mexican heroin for the American market"); see also PETER REUTER ET AL., RAND CORP., SEALING THE BORDERS: THE EFFECTS OF INCREASED MILITARY PARTICIPATION IN DRUG INTERDICTION 123 (1988) (stating that "a single cargo plane, fully loaded, could supply the nation's current demand [for cocaine] for a year"); Michael H. Abbott, The Army and the Drug War: Politics or National Security?, PARAMETERS, Dec. 1986, at 95, 96 (noting that drugs are second to petroleum, in monetary value, as the largest U.S. import); Christopher S. Wren, Why Seizing Drugs Barely Dents Supply, N.Y. TIMES, Dec. 15, 1996, § 4, at 4 (reporting that increased seizures of illegal drugs do not increase costs to the drug user). [164.] Cf. Thomas E. Ricks, The Military: The Great Society in Camouflage, ATLANTIC MONTHLY, Dec. 1996, at 24, 38 (stating that "murky new missions . . . may be chipping away at the Army's sense of itself"); DePalma, supra note 152. [165.] See McGee, supra note 3, at A30 ("[T]he open-ended nature of the military's commitment is the greatest potential hazard." (interview with Lawrence J. Korb, Assistant Sec'y of Defense under President Reagan)). At least one critic sees the expansion of the Junior Reserve Officer's Training Program, the large number of retired military personnel working as teachers, and the appointment of a retired general as the "drug czar" and another as head of the D.C. school system as steps towards militarization and a decline in civilian control of the military. Courtland Milloy, Overruling Civilian Rule, WASH. POST, Nov. 13, 1996, at B1 (describing the views of Sam Smith, editor of the Progressive Review). [166.] See, e.g., Clifford Krauss, The Somalia Mission: Congress; Clinton Gathers Congress Support, N.Y. TIMES, Oct. 8, 1993, at A14 (noting Congress's approval of President Clinton's plan to limit the goals of U.S.
forces in Somalia and commit to an end date for troop deployment); Elaine Sciolino, Loosening the Timetable for Bringing G.I.'s Home, N.Y. TIMES, Nov. 17, 1996, § 4, at 3 (discussing military doctrine of no overseas deployment without a specific timetable for withdrawal); Elaine Sciolino, U.S. Narrows Terms for Its Peacekeepers, N.Y. TIMES, Sept. 23, 1993, at A8 (noting anxiety within Clinton administration over "open-ended peacekeeping missions in Somalia and Bosnia"); Senatorial Passion: U.S. Interest or Deadly Quagmire?, N.Y. TIMES, Dec. 14, 1995, at A15 (quoting from Senate debate about sending troops to Bosnia and the fear of an open-ended mission). But see Sciolino, Loosening the Timetable for Bringing G.I.'s Home, supra, at 3 (reporting criticism of establishing a clear exit strategy for troops deployed overseas). [167.] Reimer, supra note 152, at 9 (stating that "readiness and training . . . [are] the reason the Army exists"). [168.] See Department of Defense Authorization Act, 1982, Pub. L. No. 97-86, § 905(a)(1), 95 Stat. 1099, 1116 (1981) (codified as amended at 10 U.S.C. § 376 (1994)); PCA Hearing, supra note 13, at 16 (statement of William H. Taft IV, Gen. Counsel, U.S. Dep't of Defense). [169.] To be effective the military would have to be able to say "no," but it is actually marketing itself with a 55-page pamphlet to local law enforcement. See McGee, supra note 3, at A30. [170.] "[R]eadiness is a tough, continuous job." Smith, supra note 42, at 91 (emphasis added). The most "productive"--and full-time--purpose of the military during peacetime is the deterrence of war. Id. As Elihu Root said, the goal of the military is "[n]ot to promote war, but to preserve peace through intelligent and adequate preparation." Id. [171.] See Ricks, supra note 164, at 38 (noting the need for the Army to justify its existence "[a]rguably for the first time in its existence"); William Rosenau, NonTraditional Missions and the Future of the U.S. Military, FLETCHER F.
WORLD AFF., Winter/Spring 1994, at 31, 32 (suggesting that non-military missions could protect the infrastructure of the Army by giving it "a new organizational vision"). [172.] These new missions occur at a time when the Army has increased its overseas operational deployments by 300%, but has been forced to accommodate for diminishing resources. See Reimer, supra note 152, at 5, 7; see also Ronald B. Flynn, The National Guard Drug Interdiction Mission: A Circumvention of Posse Comitatus? 21 (Apr. 2, 1990) (unpublished U.S. Army War College Military Studies Program Paper, available through Defense Technical Information Center) (contending that the military's involvement in the counter-drug effort detracts from training readiness as it relates to war-fighting). But see Dale E. Brown, Drugs on the Border: The Role of the Military, PARAMETERS, Winter 1991-92, at 50, 50 (describing military involvement in counter-drug efforts as "valuable, real-world training for the participating units"). [173.] Mike O'Connor, Does Keeping the Peace Spoil G.I.'s for War?, N.Y. TIMES, Dec. 13, 1996, at A3 (discussing the need for approximately 18 months of rest and retraining to reestablish the basic military readiness of the Army troops who were deployed in Bosnia); see also DePalma, supra note 152. The work of U.S. soldiers in Bosnia "runs counter to their traditional training" and their skills are perishable. Id. One sergeant, a tanker, spends each day checking identification at a checkpoint. Id. As a result of mostly working in small groups, the soldiers' ability to work in large coordinated groups suffers. Id. [174.] Dunlap, supra note 17; see Ricks, supra note 164, at 38. [175.] In fiscal year 1990, forty-eight percent of all AWACS (radar surveillance plane) flying hours worldwide were devoted to counter-drug efforts. Brown, supra note 172, at 53. In the case of the National Guard, 532,899 man-days and 5155 missions were devoted to the counter-drug effort. Id. [176.]
See Alex Prud'Homme, Phantom Army: For the Most Part the National Guard Fought Well in the Gulf but Some Outfits, Plagued by No-Shows and Poor Training, Never Got to the Front, TIME, June 10, 1991, at 18; see also DARPA Seeks to Upgrade National Guard Training, DEF. & AEROSPACE ELECS., Feb. 22, 1993 (noting the National Training Center did not believe that National Guard troops called up for the Persian Gulf War were combat ready), available in 1993 WL 289339; National Guard: Peacetime Training Did Not Adequately Prepare Combat Brigades for Gulf War, GEN. ACCT. OFF. REP. & TESTIMONY, Dec. 1, 1991 (referencing GAO report on lack of readiness of National Guard troops), available in 1991 WL 2659334. This concern turns on the use of the National Guard in nontraditional missions, when its active service time should be spent honing fighting capabilities so that it can back up the regular army in time of war. [177.] Dunlap, supra note 17, at 69. [178.] See McGee, supra note 3, at A30 (discussing the military's counter-drug headquarters). This "drug command" is referred to as Joint Task Force Six ("JTF-6"). JTF-6 circulates a 55-page pamphlet to local law enforcement, which in effect markets Green Berets, Navy SEAL teams, and other services. Id. [179.] Border duty consists of patrolling the U.S. border to stop illegal entry and enforcing customs laws at border entry points. [180.] To fulfill this mission, it is entirely reasonable for the military to begin civilian surveillance again under the same rationale used in the 1960s. See supra note 16 and accompanying text. [181.] See Wren, supra note 163, at 4 (noting that drug trafficking does not seem to follow basic economic principles). Increased seizures have not increased the cost to users of illegal drugs. Id. Increased interdiction results in increased seizures, but those seizures only increase transportation expenses; the drug dealers are hassled, but the drug flow continues. Id.
The 1996 price of cocaine is only one-fifth of the 1981 price, and heroin is less than half of its 1980 price. William March, Drugs Ignore Politicians, TAMPA TRIB., Sept. 28, 1996, at 1. In 1981, Congress began involving the military in the drug war. See supra notes 9-10 and accompanying text. Obviously, military involvement and increased seizures have not helped to turn the tide in the drug war; it is being lost. [182.] See Editorial, False Choices on Terrorism, N.Y. TIMES, Apr. 30, 1995, § 4, at 14 (suggesting that the F.B.I. should receive more funding and use it to improve the training of agents); Editorial, Washington's Undeclared War on Drugs, ST. LOUIS POST-DISPATCH, Dec. 8, 1996, at 2B (suggesting that the drug war is best handled by agencies other than the military). These resources need to be developed when civilian law enforcement is not able to take full advantage of information provided by the military. See Ted Waronicki, Letter to the Editor, War on Drugs Must Begin in the Home, TAMPA TRIB., Oct. 20, 1996, at 3 (noting that civilian law enforcement agencies attempted to apprehend only a small percentage of suspicious aircraft identified by the military). [183.] See supra Part II. [184.] See supra Part IV. [185.] See supra notes 47-52 and accompanying text. [186.] See supra notes 110-13 and accompanying text. [187.] See supra note 111 and accompanying text. [188.] See Kathleen M. Sullivan, Constitutional Constancy: Why Congress Should Cure Itself of Amendment Fever, 17 CARDOZO L. REV. 691, 700 (1996). "It would have been an unwise attempt to provide, by immutable rules, for exigencies which, if foreseen at all, must have been seen dimly, and which can be best provided for as they occur." Id. (quoting McCulloch v. Maryland, 17 U.S. (4 Wheat.) 316, 415 (1819) (Marshall, C.J.)). [189.] See supra note 113 and accompanying text. [190.] See Sullivan, supra note 188, at 700-01. [191.]
See generally Sullivan, supra note 188 (discussing arguments against constitutional amendments in general); cf. Ronald L. Goldfarb, The 11,000th Amendment: There's a Rush to Amend the Constitution, and It Shows No Signs of Letting Up, WASH. POST (Nat'l Weekly Ed.), Nov. 25-Dec. 1, 1996, at 22, 22-23 (noting that over 11,000 amendments have been proposed in Congress since 1789--more than one a week). As Professor Kathleen Sullivan has said, "[T]here are strong structural reasons for amending the Constitution only reluctantly and as a last resort. This strong presumption . . . has been bedrock in our constitutional history, and there is no good reason for overturning it now." Sullivan, supra note 188, at 694. [192.] See GOODPASTER & HUNTINGTON, supra note 21, at 26. [193.] Title 10 defines the term "armed forces" to "mean the Army, Navy, Air Force, Marine Corps, and Coast Guard." 10 U.S.C. § 101(a)(4) (1994). [194.] See supra text accompanying note 12 for the current text of the PCA and note the use of the same phrasing. [195.] This definition is taken nearly verbatim from 42 U.S.C. § 5122(1) (1994) (regarding disaster relief). [196.] This language is derived from 15 U.S.C. § 2655(c) (1994). [197.] See supra note 53 and accompanying text. [198.] The statutes that currently cover embezzlement or misappropriation of federal services, funds, or equipment would fill the void left by repeal of the PCA from Title 18. [199.] See S. 1, 94th Cong., 1st Sess., tit. II, pt. G. The text of the bill proposed the following text as a re-enactment of the PCA in 10 U.S.C. § 127: Whoever, except in cases and under circumstances expressly authorized by the Constitution or Act of Congress, knowingly uses any part of the Army, Navy, or Air Force as a posse comitatus or otherwise to execute the laws is guilty of a Class A misdemeanor. Nothing in this section shall be construed to affect the law enforcement functions of the United States Coast Guard. Id., cited in James P.
O'Shaughnessy, Note, The Posse Comitatus Act: Reconstruction Politics Reconsidered, 13 AM. CRIM. L. REV. 703, 711 n.43 (1976). [200.] See supra text accompanying note 12. [201.] See supra notes 66-67 and accompanying text. The 94th Congress also considered similarly broadening the coverage of the PCA. See supra note 199. [202.] See supra note 68 and accompanying text. [203.] A regulation that is not codified does not require congressional approval to be changed. Cf. supra note 68 (noting the withdrawal of a section similar to the DoD Regulations from the Code of Federal Regulations presumably to make the regulations easier to change). [204.] A similar proposal was made by Professor Christopher Pyle. See PCA Hearing, supra note 13, at 42. What is at stake is nothing less than a consistent theory of the proper role of armed forces in a democratic republic. That theory . . . envisions the military as a back-up force, operating under its own command, prepared to deal with large scale emergencies, beyond the capabilities of civilian authorities, not for the purpose of executing civilian laws, or even assisting in their execution, but for restoring order, saving lives, and protecting property from natural or man-made disasters. Id. (emphasis added) (statement of Christopher H. Pyle, Professor, Mount Holyoke College). [205.] See McGee, supra note 3, at A30. [206.] See PCA Hearing, supra note 13, at 16 (testimony of William H. Taft, Gen. Counsel, U.S. Dep't of Defense) (quoting an Aug. 6, 1979 report on the PCA by the Departments of Justice and Defense). [207.] Id. [208.] This Note does not suggest that the armed forces are any less dedicated, but these domestic problems are not, and should not be, their primary mission. When the need arises, the armed forces must be able to direct their resources towards their primary mission; thus, they cannot give these domestic problems the full attention that they need.
From shovland at mindspring.com Tue Jul 19 19:05:32 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Tue, 19 Jul 2005 21:05:32 +0200 (GMT+02:00) Subject: [Paleopsych] threats in groups Message-ID: <14342610.1121799933391.JavaMail.root@wamui-andean.atl.sa.earthlink.net> Groups eventually recognize that someone in a leadership position is acting contrary to the interests of the group, but not always before a lot of damage is done -- for example, Enron. -----Original Message----- From: Michael Christopher Sent: Jul 18, 2005 10:10 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] threats in groups >>What events signal to individuals that the functioning of their group may be compromised? Because groups enhance individual success by providing members with valuable resources, members should be attuned to potential threats to group-level resources such as territory, physical security, property, economic standing, and the like.<< --Which brings up the issue of what happens when group members who are threatening to the long term health of the group are rewarded for a single contribution, as in an abusive male rewarded for being able to punish an enemy, or a member of a corporation who brings in a lot of money while undermining the integrity of the whole. Or a group that colludes with each other, forgives each other's "sins" in order to stay powerful as a group, to the detriment of the tribe. Only when the tribe can meet the needs that corrupt members meet in order to avoid punishment can dysfunctional groups be dissolved. Otherwise you have people who are very dysfunctional carrying the flag/bible, posturing as defenders of the tribe to avoid being seen as a problem themselves. michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Tue Jul 19 19:29:53 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:29:53 -0400 (EDT) Subject: [Paleopsych] BBC: Co-opting the creative revolution Message-ID: Co-opting the creative revolution. Towards an internet ruled government http://news.bbc.co.uk/2/hi/technology/4683385.stm Digital technology is providing people with the tools to produce and share content like never before, and it is set to throw the relationship between them and institutions into turmoil, say experts. "I am predicting 50 years of chaos," says leading digital thinker Clay Shirky. "Loosely organised groups will be increasingly given leverage. "Institutions will come under increasing degrees of pressures and the more rigid they are, the more pressures they will come under. "It is going to be a mass re-adjustment," he says, addressing delegates at the TED (Technology, Entertainment and Design) conference in Oxford, UK. TED brings together experts in design, technology, and entertainment to share their ideas about the future. In our hands At a time when companies are grappling with how to make cool new stuff, it is the rising tide of creative collaborators working through the channel and tools of the net that is showing the way ahead. This is not a new trend, explains Tony Blair's favourite political analyst and author Charles Leadbeater. The mountain bike was created out of the frustrations of a few northern Californians who were dissatisfied with ordinary bikes and racers. They took what they wanted from those to create something entirely different. This was 10 to 15 years before the big companies saw the commercial value, explains Mr Leadbeater. About 65% of bike sales in the US are mountain bikes now. 
"It is when the net combines with these passionate consumers that you get the explosion of creative collaboration," says Mr Leadbeater. "Out of that you get the need for better organisations; how do you organise yourself without organisation?" It is indeed a challenge getting a very large, distributed group of people to work in an effective, valuable, collaborative way, says Mr Shirky. But efforts such as the online encyclopaedia Wikipedia show that people power can and does work. It is not just a community that runs wild. There are volunteers and voting systems in place to ensure accuracy and decency. Its founder, Jimmy Wales, is the "monarch" who will take tough decisions if need be, though this has not happened very often. Power of the masses Just because something is created by anyone with a net connection and some sort of know-how, existing outside of formal, does not make it any less accurate or useful, say the digital thinkers of our times. "For the first time since the industrial revolution, the most important means and components of core economies are in the hands of the population at large," explains Yale Law professor Yochai Benkler. Computation, in other words, is in the hands of the entire population. And those computing tools are getting easier to use, more approachable, as well as more powerful. If you are a games community with a million players, you only need one percent to be co-developers. Imagine that working in education, or the NHS Charles Leadbeater Blogging, services, tag-based applications to help people find content, peer-to-peer ways of distributing content, grid computing, open source software, are all examples of how this is happening online now. Ordinary people can become photographers whose images are used all over the world, for instance. Tagging them with keywords helps others find and classify them usefully. This act of tagging does not require swathes of trained librarians. 
What these power tools enable is the ability for people with small ideas to make them real, share them, and let them grow. But this philosophy has many big companies jittery. There are obstacles to overcome. This is what creativity is about, but Mr Leadbeater argues there is a further challenge in thinking about how creativity comes about. It is not about creative people hired by big companies wearing colourful baseball caps, thinking up whacky ideas in an office with grass for a carpet, he says. Professor Benkler describes this as a new "transactional framework" if you speak economics. In other words, it is essentially the first system of social production, sharing and exchange for a long time that is actually making companies sit up and listen, because they have to. Money and communities Big companies are now seeing the economic opportunity of this kind of open, collaborative production, by the people, making social production a fact and not just a fad. Mr Leadbeater's job is to work out how these new ways of working can be embraced as part of public policy, where people get something back - not necessarily in monetary form - for what they contribute. "It is about companies built on communities; the company provides the community with the tools it needs," he says. "If you are a games community with a million players, you only need one percent to be co-developers. Imagine that working in education, or the NHS. "Some sort of middle ground is going to be the most productive." As for the next 50 years, it is up to the generation which has no idea what a real library looks like inside to decide how it will work out, says Mr Shirky. Our entire approach to patents and intellectual property and innovation so far has been based on the idea that the inventor tells us what something is for, says Mr Leadbeater. That has to change; patents and copyright have to move with that and be about orchestrating that creativity rather than smothering it. 
To Mr Shirky, there is one thing we can do now: "Since we can see it is coming, we might as well get good at it." From checker at panix.com Tue Jul 19 19:29:59 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:29:59 -0400 (EDT) Subject: [Paleopsych] TCS: Can You Breed for Genius? Message-ID: Can You Breed for Genius? http://www.techcentralstation.com/071805C.html By Nick Schulz Published 07/18/2005 Editor's note: David Plotz is the author of The Genius Factory: The Curious History of the Nobel Prize Sperm Bank. He recently sat for an interview with Nick Schulz. Schulz: How did you come to write this book? Plotz: I knew of the existence of the Nobel Prize Sperm Bank and I was interested in what had happened to it. It opened in 1980 and was shut down in 1999, and more than 200 kids had been born from it. It was an incredibly radical experiment in human genetic engineering, yet nobody knew who these kids were, how they turned out, or who the donors were. When the founder of the Bank (Robert Graham) died, the records were sealed and employees were not talking or were dead. So there was no obvious way to find out who the donors were, who the children were, or how their lives had turned out. Schulz: And some television producers and other reporters from time-to-time had gone to try to look into it and had run into a brick wall. In a sense, this book is a unique journalistic enterprise. Do you think that this book could have been written before the advent of e-mail and the Internet? Plotz: I don't think it could have been. And there are a few reasons why. Other newspaper, TV and magazine reporters had all gone to Graham and tried to report it out, but there were only two families that were willing to be public about their involvement. Essentially, you were trying to tell the story of this whole bank of 215 kids of this grand experiment based on none of the donors and two children born from it. And, so, it was just a black hole. 
It was a big mystery with no obvious way to crack it. So when I set out to do this at Slate, Jack Shafer [an editor at Slate] offered me some insight. He said that we couldn't find them because the donors and children were scattered all over the world and didn't know each other. There was nothing to connect them to a community so they were just trying to keep secrets so that their neighbors wouldn't know about it. It was a fundamentally private thing that they had undertaken so there was no way to find them -- but they could find us. And we wanted to harness the network power of the Internet to do this. I wrote a story in early 2001 outlining the bare bones of what was known about the Bank, which was very little. I said that if you want to tell us what happened, please contact us. I started with the idea of using readers as sources. A regular newspaper or TV reporter can't do that because at The New York Times you couldn't publish a story saying: "We don't know anything about this, contact us if you know something about this." You can't put that in The New York Times. It would look really weird. It would look like you weren't doing your job. But we could do it in Slate because it's a newer enterprise and people use the Internet in a different way. Internet journalism works in a different way, and Internet journalism has this fundamentally collaborative quality to it. And so we were taking that collaborative policy to its furthest degree by saying, "You are readers, we don't know the story, but you might because you might have been the donor or might have been an employee or you might be a child." And what's more, we were counting on the repeater effect of the web -- that this would get posted to places where people involved with sperm banks might read it and that it would have a life on the Internet search engine and wouldn't just be up there for a day. 
And what happened was at first one donor-entrepreneur, Edward Burnham, saw it and then I wrote a story about him, and then some other donors saw that and wanted to respond to him. Meanwhile, some of the mothers and children saw it and they wanted to get in touch with me. And, eventually, it got posted on Slate and also on MSN, which has a much wider readership than Slate does. Millions of people saw the story in one form or another and they started contacting me. Each time I would write another story about another family or another donor, more people would contact me. And it continued to echo and echo for years. I mean, practically every day, some e-mail arrives from somebody connected with this Bank. The most interesting effect of it was in the case of donor White, one of the main stories in the book, where I wrote a story about a girl and her mother who were searching for their donor, donor "White," and about their search. They had reason to believe that donor White was looking for them. And we posted on the web and didn't hear anything for 15 months, and then donor "White," using a search engine for the very first time, had typed in "Genius Sperm Bank" and, lo and behold, one of the first references to come up was the story I had written about this mother searching for donor White and then about the sperm bank. One other final reason I think the Internet and e-mail in particular worked, and made this possible, was that it was very important to me going into this project that I wasn't going to try to destroy people's private lives -- that I wanted to tell the stories of the Bank, but these were basically private citizens and I didn't want to expose their lives and embarrassing secrets to the world. They came to me with a guarantee of privacy. In fact, every single person I talked to said they would never have participated if I were publishing their real name. 
But the anonymity and distancing of e-mail allowed people to communicate with me in a manner where they felt protected because they were able to contact me by e-mail and feel me out to make sure that I wasn't going to mess with them. And that impersonality in a way allowed the intimacy that would follow. Schulz: You, the author, end up being a critical part of the dramatic narrative of the book. There is almost no way to tell the story without that being the case, is that right? Plotz: That's right. And this caused me anxiety and discomfort, and it was a real conundrum. I set out to do the project with an intellectual interest in this, to answer the questions: Can you breed for genius, and what happens when you try to do it, and what are the results? What does it tell us about our efforts in the future to breed for genius? It was an intellectual interest, and the people who were contacting me were sort of interested in talking to me about that sometimes. But, and this was something that I only realized after the process was already underway, the reason they were contacting me was that I had opened a door that they thought was permanently shut. When they had been involved in the Bank, as far as they knew, everything was doubly anonymous -- the donors didn't know who the children born from their sperm were, and the children and the mothers didn't know who the donors were. And they assumed they would never get a chance to meet each other and that the children would never get a chance to meet siblings. And what had happened was these folks who were reading these articles realized that there were children who wanted to meet their fathers; there were donors who wanted to meet their children; there were children who wanted to meet siblings. And then, all of a sudden, I had given them a way to do it. And I hadn't even thought about this. It hadn't even occurred to me that I was going to become this middleman. 
But I became, by total accident, this middleman where I knew the identity of donor "Coral," let's say, or donor "Turquoise," and donor "Turquoise's" children were contacting me, and wanted to meet donor "Turquoise," and donor "Turquoise" himself wanted to meet his children. And so you have a situation where, well, should I put them in touch with each other or what should I do? And my decision was that, first of all, it was a great narrative to have these people get to meet and fulfill their longing, but it is a human thing. It was people who wanted to know each other, and they had good reasons to want to know each other, and they were making the decisions. The donors were adults and then these mothers or the children were adults and had come to this decision for good, rational reasons, and didn't seem to be out to do it to gain money or anything like that. They were out to do it because they had an emotional void that they needed to fill. And so I realized that I had a moral obligation, which happened to coincide with the kind of fortunate journalistic opportunity to put these people together. And so I did. And the results are not always -- in the book, the results are bittersweet. There are times when the knowledge of children getting to know their biological fathers can be incredibly rewarding as it is in one very large case that I handled in the book. Schulz: Donor "White" and "Joy." Plotz: Right. The donor "White" and "Joy," and then there is the case of donor "Carl" and "Tom," where the knowledge and the meeting of the father and son is in many ways a disappointment. But it is a disappointment that is profound for "Tom" and it has meant a lot to him. Even though I think it hasn't turned out the way he imagined it would at the beginning when he thought Jonas Salk was his father, it's really important for him to know who he is. And he feels like he finally does even if the result isn't exactly right. 
And I do think there are very real journalistic ethics questions about what I was doing. And I think what I tried to do in the book was be completely open about what my involvement was. To make it very clear to the readers, here is what I am doing, here is why I am doing it so that they can judge for themselves, "Is this right or wrong and should I listen to this person?" Schulz: You mentioned that part of your own interest in looking into this story was to unpack what it was that was driving this genetic engineering for smarter kids and a smarter race. There's all sorts of interesting history that you go into with positive eugenics and the history of eugenics and things like that. But it seems that you discovered, in looking into this, that Americans seem to value some things as highly as or more highly than intelligence, is that right and did that surprise you? Plotz: That's right. One of my favorite aspects of the Nobel Prize Sperm Bank is that, in fact, the Nobel Prize Sperm Bank ended up without any children of Nobel Prize donors because Robert Graham started out with some Nobel Prize donors in 1980, and then after this bad publicity these Nobelists didn't want to be involved. But he also realized that Nobelists were too old to be good donors. And, more importantly, Nobelists were a product that women didn't exactly want. They were too much brainiac for the customers who were coming to him. And Graham said, on one occasion, "[T]hose Nobelists, they could never win a basketball game." And he realized that, in fact, the pure brainpower was not a commodity that the customers wanted, and he was very customer focused. He was a very successful businessman who turned to sperm banking, and he realized he had to sell to these customers. And the things that mattered to them as much as brain power, which mattered a great deal, were height and good health and good temperament and good looks. 
And so the women who were coming to Robert Graham's Bank, their first question wouldn't be: "How smart is he?" Their first question would be: "How tall is he?" And then it would be: "What does he look like and is he nice?" And, so, those questions it turned out mattered as much to women as the brain question. I think what we've ended up with has been this consumer revolution in sperm banking, which I get into in the book. It's a situation where women have this expectation of everything. And they want men who combine brains, so sperm banks recruited at top colleges, and height and good health history and good looks. And so there's an intense choosiness. But the "brainiacness" isn't something that many women put above all else. Schulz: In the book you wrestled with the ethics of being a middleman, in a sense of meddling in people's family lives where you are connecting donors and their children and their wives and mothers involved -- sort of intimate family lives. But with the nature of the investigation, you had to do this. You pointed out that there were some experiences that were positive, and then also some that were a little more emotionally traumatic. Now that the book is done and is out, are there any of these families that you think would have been better off without learning more about the donor dads or the kids? Plotz: Once they began the process of wanting to know and thinking they could know, I think they were better off knowing. Once it became a question that they thought could be answered, just having the answer meant a lot. Otherwise they were going to live with uncertainty and doubt. And even when the certainty is not what you hoped for, like with "Alton," "Tom's" half brother, who was so put off by what donor "Coral" was that he never even met him, I do think, even in that case, the knowledge has been valuable to him. He is glad to know. 
And when you compare his feelings or "Tom's" feelings or the feelings of other children I know who haven't gotten to meet their donor father, I think there is a kind of closure with them. And then also I think there may be a tendency to overestimate how much learning about your biological father will affect you. If you discover your biological father is a disappointment, it doesn't make you feel you are therefore a loser. You know who you are. It can be a sadness that you won't have a close relationship with him, but I don't think you then assume, well, I am a loser because he is a loser. I think people are sophisticated enough to separate themselves from this biological donor. I think, once you ask the question, "who is my father and can I know him?", then generally it's better to get some closure than not get closure. But I do wish that sometimes people didn't ask the question at all. I think had they never asked -- had "Tom" never known who he was, or had "Alton" never known that he had this biological father out there -- they would be just as happy and better off. They would have avoided this adventure, which in "Tom's" case he enjoyed a great deal, but I don't know -- I don't know, it's a good question. I don't know. Schulz: There are plenty of children who were the products of sperm donation that don't know that their father, or the person they think is their father, is not their actual biological father, right? Plotz: Yes. In most cases, that is so. And, actually, I think there is a distance that exists between many children of donor-insemination and their non-biological fathers, when the children don't know that their father is not their biological father. You hear lots of reports about families where the father always behaves in a distant way, even if he is pretending to be the father just because it's a very hard act to keep up. So, I think that's the problem. 
Increasingly, of course, single mothers and lesbian couples are the ones who are going to sperm banks. And in that case, of course, those kids know from day one that they have a biological father out there. And I think there is going to be a tremendous amount of curiosity. As a society right now, donor anonymity is the standard practice. And I think that we are going to have to grapple with this because there are going to be these kids. In a genetic age where we believe so much of who we are is determined by what our genes are, these kids are going to have a real expectation to know who their biological fathers are. And I think we haven't really thought about what the burden is going to be on them, and whether their right to know their biological fathers is more important than the donors' right to anonymity. And actually I don't have a good answer. I do know that smarter people than me need to think about it. Schulz: Actually, I had a question about that. On page 130 of the book, you discussed "genetic expectation" and whether it could be a blessing or a curse. I was wondering, after studying these people's lives, what do you think? Especially now, with the ongoing debates over biotechnology, there is this profound sense that this really matters a lot. Plotz: Right. Schulz: But it sounds like even after investigating all of this you are still not sure. Plotz: I am not sure. I think I am much more influenced by my own experience with my children. I think before you have your children, you have this theoretical notion of what your children will be like. And you build up expectations based on what you're like and what your wife is like and based on your genes, and you think, therefore, such and such will happen. And then presented with the actuality of what your child is, your child has these distinct abilities and talents and interests and personality which seem to be independent of you. You realize that the expectation is not fair. 
It's not fair and it's somewhat corrosive. Your job is to, once you have been presented with a real live child, help encourage that child's interest in the best way that you can. But not to assume, based on your genetic expectations, that the child is a particular way. And so I come out of the project thinking that unless gene science gets a billion times better, where you can control what you can do with genes -- like actually pinpoint that this is the gene for playing the guitar really well -- I think genetic expectations tend to be a problem. I also think that most parents, and there are obviously exceptions to this, have the same reaction I do. So even the parents of the Nobel Prize Sperm Bank, where people have gone intentionally seeking this intellectual enhancement, once presented with actual children, behaved as normal parents. I think they were more involved in their kids' lives, perhaps, than the average parents because they tended to be hyper-involved moms. But I don't think they had unrealistic goals set for their children. I think they pushed them, but I don't think they were nuts about it. So, I think most people, when once they face the actual child, suspend their theoretical notions about what the child should be. Schulz: In some ways, the book examined the interplay of science and business, morality and politics and ethics. In investigating this and learning more about the histories of these families and how they dealt with the knowledge that came to them, did it in any way shape your views over other debates in biotechnology -- like stem cells or cloning -- where there are controversial genetic questions and about the future makeup of mankind? Plotz: Well, in one or maybe two related ways, it did. I think the remarkable thing about the sperm bank industry is that it's a wide-open free market. That's basically because both the right and left have been afraid to regulate fertility in this country in general. 
The right because it looks like business, like capitalism, and so they want to let it go. And the left because once you start regulating fertility it has all sorts of abortion implications that I don't think the left wants to deal with. And so as a result, sperm banking in particular has been a very wide-open free market where people who are not doctors were in it, and where it was driven by consumer behavior. And it's a case where medicine has really responded to consumers in a very effective, competitive way, and where the market has allowed this wide variety of kinds of sperm banks and kinds of sperm donors and practices within sperm banking. And because anyone who goes to the sperm bank really wants to make sure that the donor is safe, HIV testing came into banks long before the federal government thought to even regulate it. The banks realized that if they didn't have safe men, they were going to be screwed. Schulz: Right. Plotz: And it's a great case study for how the market can work in that instance. I am not absolutely certain that it applies in lots of other aspects of medicine, but it certainly proves that in one particular aspect it does. And then I think it showed the upside and downside of what happens when consumers drive medical choices. The great thing about the consumer driving it is that the banks are responsive, and the fertility industry in general has gone from being this completely top-down, doctor-dictated, very unpleasant process to being one which is patient-driven. Certainly patients are desperate, but it's the patient pushing the doctors to do what they want rather than the doctors pushing the patient. The downside is that because people are so emotionally desperate about wanting children, they may behave in ways that are slightly less rational than you would hope they would. The upside is that there is this great flourishing of ideas and experimentation and efforts to try things, and fertility research has advanced an enormous amount.
The kinds of opportunities available to infertile couples have exploded basically because the market just let it happen. And so that's one lesson that I learned out of it. Schulz: I don't want to mischaracterize it, but I would say that one of the more unlovely elements of this whole thing that you uncovered was some donors who had a "greed to breed" -- you call them the inseminators. Plotz: Right. Schulz: Describe what that is and what you learned about it and what, in your view, makes them tick. Plotz: Sure. I came across several guys who had a psychological makeup that really surprised me. It was scary. These guys had been "volunteer" donors to the Nobel Prize Sperm Bank. They had gone to the Nobel Prize Sperm Bank and convinced Robert Graham that they were intellectually gifted enough and healthy enough to be genius sperm donors. And these were guys -- there were about four of them -- who went from sperm bank to sperm bank donating, or who went to a few sperm banks and also then in their own private lives had fathered many, many children. I mean many, many children. And they were totally opportunistic breeders. I mean, they would hook up with women, get them pregnant and have children and then once the women stopped wanting to have children, they would split. The guys would just abandon the women. And then they would also go to these sperm banks in the hopes of getting their seed out there. And the most charitable explanation is that it is Darwinian. They are competing in a Darwinian game, and they are just getting an advantage over other guys and ha-ha-ha. The less charitable interpretation, which is what some psychologists gave to me, is that these guys are basically some sort of sociopaths, where they fundamentally don't recognize that these children are humans or that the women they are impregnating are humans. It's just a kind of obsessive push to advance their own egos. It's just pure obsessive ego. It's an alarming personality type. Really weird.
And, of course, it's alarming because here are the guys who are least fit. They are the guys you would definitely not want to be breeding. But because they have this compulsion to breed and because sperm banks don't look at this particular behavior (I don't think they screen for this), these guys are able to pass themselves on in vast numbers. So one of the donors I dealt with had had about 50 children from eight different sperm banks, and that's just frightening. Schulz: The story you investigated might seem to have some moral and political implications. In your view, are there any, and if so, what are they? Plotz: Well, I think one we touched on earlier, which is what do we do with genetic expectations? I think one conclusion concerns the idea that you can make children what you want them to be. Right now, the science just isn't there, and as a psychological and emotional matter, we have to ease off the idea that we can program children. I do believe in genes and I certainly believe that there are genetic associations with intelligence and personality, and so we can try to give our kids a little bit of a better chance at things, but this idea of programming children is so far away that parents really have to not even conceive of it. They just have to think, maybe I'm just getting a slight statistical edge because I'm doing something like this. That's one point. Another point, which is one of the conclusions of the book (and I'm not really sure whether it's explicit or implicit) -- the first question that anyone ever asks me is: "Are any of these kids geniuses?" And I always say, "Well, it's a non-random sample of these kids, about 35 of them. And there is a wide variety. Some are better than average students, some are wonderful athletes, some are not. In general, they do seem to be above average. There are some quite amazing kids among them." However, what I look at is what these mothers are like.
Their mothers are so involved and intent on raising accomplished children that they would have raised accomplished children whether they had gone to the Nobel Prize Sperm Bank or to Joe's Discount Sperm Warehouse. So, I think the other lesson is that with the science as unsophisticated and as crude as it is today -- and for at least the future I can foresee -- it's nurture that matters so much. These kids are thriving, but I think the kids are thriving not just because they have good genes, but because they have mothers, and in some cases fathers, who are really intent on raising them well. And so another lesson is that you can't step away from parental responsibility by saying that "we have good genes." So much of it comes from what we as parents do with our children and not the DNA we gave them. Schulz: In your passion for researching the story, you, too, went through the process of donating at a sperm bank -- although you did not make your genetic material available for others to use. Now, obviously you are married. You have a couple of kids, and that would obviously influence any decision you would ever make about this sort of thing. But imagine you aren't married or a father. After having gone all through this, is there anything about it that would interest you in being a donor? Could you envision ever doing it and actually having kids that you didn't know? Plotz: Well, when I went into the process, I thought there was nothing appealing about it. One, it was embarrassing; two, it was inconvenient; three, you create these children for whom you both have responsibility and have no responsibility. As a person who strives to take responsibility for the things I do, I found that all unsettling. However, then there's the process of going through it and the seduction of the sperm bank, where you are fawned over and petted and admired for your astonishing health history and your academic accomplishments.
And then, when you finally give them the sample and they say, "Oh yes, the average man's sperm count is 50 or 60 million and yours is 105 million," there is this kind of reptilian thrill. I mean, it's this junior high locker room or something. You know that feeling of "what were my SAT scores" kind of gloating. And even though the number of sperm in a sample has no moral content at all, the way they sell it to you makes you feel like, "I am special. And I really have something great here and maybe I should share it." I think the combination of that male ego, plus the fact that you can make a little bit of money, plus the altruistic nature of it (at least, that's the way they pitch it, in this altruistic way that you are doing a service for people that need it), makes it somewhat tempting. No, I didn't reach the point where I would ever want to do it, but I did reach the point of understanding some of the really seductive aspects of it. Schulz: Some of the donors seemed perfectly happy to make a donation and then never think about it ever again. And obviously the double-blind nature of the process and the anonymity of it make that easy, but some of them clearly thought about it from time to time, even after having gone through it. And some of them thought about it fairly frequently. Plotz: Yes. Schulz: Some of them thought about their kids, and what they are doing. If you were to go through this, how do you think you would be? Do you have some insight into what makes somebody think one way or the other? Plotz: I think it would probably weigh on you. First of all, part of it is also the stage of life in which you become a donor. The Nobel Prize Sperm Bank was recruiting guys who were rather older than the usual run of sperm donors. They were guys who were more seasoned. The average sperm bank recruits college kids.
And the college kids, and this is true of the donors I have talked to who are young, don't think about the consequences of what they have done. It doesn't really weigh on them. It's for beer money. And so they don't sit and think about who these children are. I think the guys who are, one, older; two, without children of their own for some reason; and three, of a slightly obsessive nature to begin with are the ones who end up really thinking about who these kids are. I think that what will happen is that as these college-kid sperm donors get older, if they either have children of their own or are unable to have children of their own, it may start to weigh on them. For example, look at the main website, called "the Donor Sibling Registry," where people looking for siblings of their donor-insemination kids, people looking for the donors themselves, and donors looking for their children all post. If you look at the ratio of kid posts to donor posts, it's a gazillion to one. Basically, almost no donors are out there looking for their kids; only a very, very small number are. Whereas lots of mothers and children are out there looking for donors. I think it's because, for the most part, the people posting on this board have young donors and they are the mothers of very young kids. And the guys I'm dealing with didn't face up to what it means to be a donor until they were in their 40s. Most American sperm donors are still not even in their 30s, probably. Schulz: How did this investigation shape your views of family life and human wants and needs? I am assuming you may have gone into it with some preconceived notions about some things, or maybe some things that you hadn't thought about at all. But what are some things that have been revealed to you? Plotz: I didn't go in particularly thinking about this, because I went in treating it as an intellectual project rather than an emotional project.
What I realized once these people started contacting me about wanting to meet donors and wanting to meet children was that there is this universal need for family, a universal need for a sense of personal identity, that's overwhelming. I think the children I was dealing with weren't exactly looking for fathers to come and watch them at their baseball games. They didn't really expect that the donors would turn out to be that. But they were looking for some kind of connection with these men, and some kind of sense that they belonged together. And so I think I came away just recognizing how unbelievably powerful at every level this need to know who you are and connect with the people that made you is. And this is not to diminish the non-biological fathers at all. I think that these children already have this connection to their non-biological fathers because the non-biological fathers are their fathers. But no matter what, there is this sense that people need to really know who they are. Schulz: Well David, thanks. This is the most enjoyable book I have read in a long time. It's such a fantastic piece of reporting, and you should be really proud. Plotz: Thank you.
From checker at panix.com Tue Jul 19 19:30:10 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:30:10 -0400 (EDT) Subject: [Paleopsych] BBC: New life in Africa for old PCs Message-ID: New life in Africa for old PCs http://news.bbc.co.uk/go/pr/fr/-/2/hi/programmes/click_online/4685645.stm Published: 2005/07/15 14:55:00 GMT By Dan Simmons Reporter, BBC Click Online Computers donated from the developed world are feeding an appetite for computers in Africa, where a new machine could cost more than a year's wages. The Masai Mara seems like the middle of nowhere; great expanses of land spread out in all directions. The Masai are famous the world over for their exuberant dancing, but now you are just as likely to find them sat in front of a PC monitor. Five months ago Kilgoris Secondary School was one of the first within 100km to get computers. For the Masai schoolchildren that was the first time they had ever used a PC. This equipment is simply too expensive for most school budgets, and while a Kenyan minister officially opened this computer lab, his government did not provide the computers. The story of how thousands of Africans are learning technology skills starts at a lock-up warehouse in North London. It is the home of Computer Aid International, which gives old PCs a new life in developing nations. The man behind the project receives more than 2,000 computers a month, many of which might otherwise have been thrown away. "We get donations of literally hundreds of computers at a time from universities, large corporations, right the way through to individual donations of a single machine that someone's brought from their home," said Tony Roberts, head of Computer Aid International.
High spec "All of those computers are extensively tested, cleaned and professionally refurbished here in the workshop. "We select only the highest specification machines that we know are going to be working for another three or four years, and those are the machines that we provide to organisations overseas. "We've reached the point where we've got more computers than we have organisations to distribute them to, so our priority is to identify new organisations in developing countries that can receive and distribute high volumes of computers. "But importantly, alongside that they need to be providing training and technical support, to make sure every computer that we send is made productive use of." At any one time up to 1,000 computers in the Computer Aid warehouse have been refurbished and are ready to go. Many go to Latin America and Eastern Europe, but the vast majority - eight out of 10 - are sent to Africa. Since Computer Aid started, more than 40,000 second-hand computers have been sent to African nations. One of Africa's two main distribution centres is in Nairobi. Here, they are checked over again and operating systems are installed. African charities then take on a supervisory role, promising to maintain the PCs and teach others how to use them. Many end up in outlying areas like Kilgoris in Kenya, thanks to a special school charity. 'Big achievement' "When schools receive computers I wish you could be there to see," said Tom Musili, from Computers For Schools Kenya. "The students are very happy and even the community comes to witness. It's a big achievement for a school, and if it's a school in one area you might find a migration of students from other schools without computers. "So it has been received with very high enthusiasm." After-school computer lessons and clubs, which use the newly arrived PCs, are proving popular.
"The computers really have improved the education technology because we no longer write in class, we come and put our notes into computers," said Sammy Okombe, a pupil at Kilgoris Secondary School. "It has reduced the time in class that is consumed with writing." Fellow pupil Elizabeth Momanyi said: "In terms of employment it has also improved a lot. After you finish High School it's easier to get a job with knowledge of computers." Tight security On the outskirts of Nairobi lies Kibera, Africa's biggest slum, home to 800,000 people. Some families go without food here to get their children into one of a handful of schools. It is not safe to house computers on the school site, so each Saturday a class leaves to walk to a security-patrolled classroom. On the way it is clear that even in the poorest areas people want to learn these skills. Adverts for computer courses are dotted around Nairobi, and in the slums too. The donated computers, which might well have been thrown away back in England, remain in the wrapping in which they were delivered, treated like the precious commodities they have now become. Pupils learn office software, such as Microsoft Word and Excel. At Nairobi's YMCA anyone can enrol for cut-price lessons on these imported second-hand computers. It is one of the few places where the computers have an internet connection. After four months of visiting the YMCA, Josephine Anyango got a job teaching others the skills she had learnt. She also found a little sideline. "I've made letterheads, prospectuses, business cards, wedding cards," she said. Despite the high adult literacy rate in Kenya, unemployment here is high, so after the sacrifices have been made to get through school, competition for jobs is fierce. It has left many believing they can get a vital edge by pinning their hopes on these new skills.
With many African governments unable to provide the technology their countries need, it is often left to charities to bring technology to the people. It is a piecemeal approach, but for the lucky few it is a valuable insight into Africa's future. Click Online is broadcast on BBC News 24: Saturday at 2030, Sunday at 0430 and 1630, and on Monday at 0030. A short version is also shown on BBC Two: Saturday at 0645 and BBC One: Sunday at 0730. Also [1]BBC World. References 1. http://www.bbcworld.com/content/template_tvlistings.asp?pageid=668 From checker at panix.com Tue Jul 19 19:30:52 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:30:52 -0400 (EDT) Subject: [Paleopsych] NYT: A Gene for Romance? So It Seems (Ask the Vole) Message-ID: A Gene for Romance? So It Seems (Ask the Vole) http://www.nytimes.com/2005/07/19/science/19gene.html?pagewanted=print By NICHOLAS WADE Biologists have been making considerable progress in identifying members of a special class of genes - those that shape an animal's behavior toward others of its species. These social behavior genes promise to yield deep insights into how brains are constructed for certain complex tasks. Some 30 such genes have come to light so far, mostly in laboratory animals like roundworms, flies, mice and voles. Researchers often expect results from these creatures to apply fairly directly to people when the genes cause diseases like cancer. They are much more hesitant to extrapolate in the case of behavioral genes. Still, understanding the genetic basis of social behavior in animals is expected to cast some light on human behavior. Last month researchers reported on the role of such genes in the sexual behavior of both voles and fruit flies. One gene was long known to promote faithful pair bonding and good parental behavior in the male prairie vole.
Researchers discovered how the gene is naturally modulated in a population of voles so as to produce a spectrum of behaviors from monogamy to polygamy, each of which may be advantageous in different ecological circumstances. The second gene, much studied by fruit fly biologists, is known to be involved in the male's elaborate suite of courtship behaviors. New research has established that a special feature of the gene, one that works differently in males and females, is all that is needed to induce the male's complex behavior. Social behavior genes present a particular puzzle since they involve neural circuits in the brain, often set off by some environmental cue to which the animal responds. Catherine Dulac of Harvard has found that the male mouse depends on pheromones, or air-borne hormones, to decide how to behave toward other mice. It detects the pheromones with the vomeronasal organ, an extra scent-detecting tissue in the nose. The male mouse's rule for dealing with strangers is simple - if it's male, attack it; if female, mate with it. But male mice that are genetically engineered to block the scent-detecting vomeronasal cells try to mate rather than attack invading males. The mice have other means - sound and sight - of recognizing male and female. But curiously, nature has placed the sex discrimination required for mating behavior under a separate neural circuit aroused through the vomeronasal organ. "It was very surprising for us," Dr. Dulac said. The gene that was eliminated from the mice is a low-level member of a presumably complex network that governs the inputs and outputs necessary for mating behavior. The most striking behavioral gene discovered so far is a very high level gene in the Drosophila fruit fly. The gene is called fruitless because when it is disrupted in males they lose interest in females and instead form mating chains with other males. The male's usual courtship behavior is pretty fancy for a little fly. 
He approaches the female, taps her with his forelegs, sings a song by vibrating his wing, licks her and curls his abdomen for mating. If she is impressed she slows down and accepts his proposal. If not, she buzzes her wings at him, a gesture that needs no translation. All these behaviors, researchers discovered several years ago, are controlled by the fruitless gene - fru for short - which is switched on in a specific set of neurons in the fly's brain. The gene is arranged in a series of blocks. Different combinations of blocks are chosen to make different protein products. The selection of blocks is controlled by a promoter, a region of DNA that lies near but outside the fru gene itself. So far four of these fru gene promoters have been found. Three work the same way in both male and female flies. But a fourth selects different blocks to be transcribed, making different proteins in males and in females. This difference, it seemed, was somehow the key to the whole suite of male courtship behaviors. Last month Barry J. Dickson of the Austrian Academy of Sciences provided an elegant proof of this idea by genetically engineering male flies to make the female version of the fruitless protein, and female flies to generate the male version. The male flies barely courted at all. But the female flies with the male form of fruitless aggressively pursued other females, performing all steps of male courtship except the last. How does the male form of the fruitless protein govern such a complex behavior? Dr. Dickson and his colleagues have found that the protein is produced in 21 clusters of neurons in the fly's brain. The neurons, probably connected in a circuit, presumably direct each step of courtship in a coordinated sequence. Surprisingly, female flies possess the same neuronal circuit. The presence of the male form of fruitless somehow activates the circuit, in ways that are still unknown.
Fruitless serves as a master switch of behavior, just as other known genes serve as master switches for building an eye or other organs. Are behaviors and organs constructed in much the same way, each with a master switch gene that controls a network of lower level genes? Dr. Dickson writes that other such behavior switch genes may well exist but could have evaded detection because disrupting them - the geneticist's usual way of making genes reveal themselves - is lethal for the fly. (Complete loss of the fruitless gene is also lethal, and the gene was discovered through a lucky chance.) Though researchers like to focus on specific genes, they are learning that in behavior, an organism's genome is closely linked to its environment, and that there can be elaborate feedback between the two. Honeybees spend their first two to three weeks of adult life as nurses and then switch to jobs outside the hive as foragers for the remaining three weeks. If all foragers are removed from a hive, the nurse bees will sense the foragers' absence through a pheromone and assume their own foraging roles earlier. As the colony ages, however, there are too few nurses, so some bees stay as nurses far longer than usual. Gene Robinson, a bee biologist at the University of Illinois, has found that a characteristic set of genes is switched on in the brains of nursing bees and another set in foraging bees. This is an effect of the bees' occupation, not of their age, since both the premature foragers and the elderly nurses have brain gene expression patterns matched to their jobs. Evidently the division of labor among bees in a hive is socially regulated through mechanisms that somehow activate different sets of genes in the bees' brains. A remarkable instance of genome-environment interaction has been discovered in the maternal behavior of rats.
Pups that receive lots of licking and grooming from their mothers during the first week of life are less fearful in adulthood and more phlegmatic in response to stress than are pups that get less personal care. Last year, Michael J. Meaney and colleagues at McGill University in Montreal reported that a gene in the brain of the well-groomed pups is chemically modified during the grooming period and remains so throughout life. The modification makes the gene produce more of a product that damps down the brain's stress response. The system would allow the laid-back rats to transmit their behavior to their pups through the same good-grooming procedure, just as the stressed-out rat mothers transmit their fearfulness to their offspring. "Among mammals," Dr. Meaney and colleagues wrote in a report of their findings last year, "natural selection may have shaped offspring to respond to subtle variations in parental behavior as a forecast of the environmental conditions they will ultimately face once they become independent of the parent." A full understanding of these behavior genes would include being able to trace every cellular change, whether in a hormone or pheromone or signaling molecule, that led to activation of the gene and then all the effects that followed. Dr. Robinson has proposed the name "sociogenomics" for the idea of understanding social life in terms of the genes and signaling molecules that mediate them. The genes discovered so far mostly seem to act in different ways and it is hard to state any general rules about how behavior is governed. "It's early days and we don't have enough information to develop theories," Dr. Robinson said. A question of some interest is how far the genetic shaping of behavior exists in people. Larry J. 
Young of Emory University, who studies the social behavior of voles, said that, in people, activities like the suckling of babies, maternal behavior and sexual drives are likely to be shaped by genes, but that sexual drives are also modulated by experience. "The genes provide us the background of our general drives, and variations in these genes may explain various personality traits in humans, but ultimately our behavior is very much influenced by environmental factors," he said. Researchers can rigorously explore how behavioral genes operate in lower animals by performing tests that are impossible or unethical in people. "The problem with humans is that it is extremely difficult to prove anything," Dr. Dulac said. "Humans are just not a very good experimental system." From checker at panix.com Tue Jul 19 19:31:48 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:31:48 -0400 (EDT) Subject: [Paleopsych] NYT: A Skeleton Moves From the Courts to the Laboratory Message-ID: A Skeleton Moves From the Courts to the Laboratory http://www.nytimes.com/2005/07/19/science/19skul.html By TIMOTHY EGAN SEATTLE, July 18 - The bones, more than 350 pieces, were laid out on a bed of sand, a human jigsaw with ancient resonance. Head to toe, one of the oldest and best-preserved sets of remains ever discovered in North America was ready to give up its secrets. After waiting 9 years to get a close look at Kennewick Man, the 9,000-year-old skeleton that was found on the banks of the Columbia River in 1996 and quickly became a fossil celebrity, a team of scientists spent 10 days this month examining it. They looked at teeth, bones and plaque to determine how he lived, what he ate and how he died. They studied soil sedimentation and bone calcium for clues to whether he was ritually buried, or died in the place where he was found. They measured the skull, and produced a new model that looks vastly different from an earlier version. 
And while they were cautious about announcing any sweeping conclusions regarding a set of remains that has already prompted much new thinking on the origins of the first Americans, the team members said the skeleton was proving to be even more of a scientific find than they had expected. "I have looked at thousands of skeletons and this is one of the most intact, most fascinating, most important I have ever seen," said Douglas W. Owsley, a forensic anthropologist from the Smithsonian Institution's National Museum of Natural History. "It's the type of skeleton that comes along once in a lifetime." He said the initial job of the team was to "listen to the bones," and the atmosphere, judging from the excitement of the scientists as they discussed their work, was electric. Dr. Owsley said answers to the big questions about Kennewick Man - where he fits in the migratory patterns of early Americans, his age at the time of death, what type of culture he belonged to - will come in time, after future examinations. "But based on what we've seen so far, this has exceeded my expectations," said Dr. Owsley, leader of the 11-member team and one of the scientists who sued the government for access to the bones. "This will continue to change and enhance our view of early Americans." In preparation for the initial examination, the hip and skull were flown to Chicago, where they went through high-resolution CT scans, much more detailed than hospital scans. Those three-dimensional pictures were used to produce casts and replicas of the bones. For now, the team has finished what amounts to a sort of autopsy, with added value. To that end the examination, which took place under extraordinary circumstances at the Burke Museum of Natural History and Culture at the University of Washington, was aided by a forensic anthropologist, Hugh Berryman of Nashville, who often assists in criminal investigations. "This is real old C.S.I.," said Dr. 
Berryman, referring to the crime scene investigations that inspired the hit television shows. The skeleton caused a furor from the time of discovery, making waves far beyond the academic realm, after an examining anthropologist said it appeared to have "Caucasoid" features. One reconstruction made Kennewick Man look like Patrick Stewart, the actor who played Capt. Jean-Luc Picard in "Star Trek: The Next Generation." American Indian tribes in the desert of the Columbia River Basin claimed the man as one of their own, calling him the Ancient One. The tribes planned to close off further examination and to bury the remains, in accordance with a federal law that says the government must turn over Indian remains to native groups that can claim affiliation with them. A group of scientists sued, setting off a legal battle, while the bones remained in the custody of the Army Corps of Engineers. In 2002, a federal magistrate, John Jelderks of Portland, Ore., ruled that there was little evidence to support the idea that Kennewick "is related to any identifiable group or culture, and the culture to which he belonged may have died out thousands of years ago." The ruling, backed by a federal appeals court last year, cleared the way for the scientists to begin their study. After being dragged into the culture wars, Kennewick Man remains a delicate subject - something that was clear in how the examining scientists parsed their descriptions of the skull at the end of 10 days of study. David Hunt, an anthropologist at the Smithsonian who was instrumental in remodeling the skull, said he was sure there would be criticism of his reproduction, but that it was based on the latest and most precise measurements of the head, accurate to within a hundredth of an inch. Standing by the translucent model inside the Burke, Dr.
Hunt said, "I see features that are similar to other Paleo Indians," referring to remains older than 7,000 years that have been found in North America. But his colleague at the Smithsonian, Dr. Owsley, said that term was imprecise. "It should be Paleo-American," Dr. Owsley said. "These bones are very different from what you see in Native American skeletons." Earlier, other anthropologists said that Kennewick Man most resembled the Ainu, aboriginal people from northern Japan. The scientists who examined Kennewick Man this month did not dispute that designation, but they said fresh DNA testing, carbon dating and further examinations would give them more accurate information. Earlier DNA testing, done during the court cases, failed to turn up matches with contemporary cultures. One key to Kennewick Man's life and times will be the stone spear point that was found embedded in his hip bone. Dr. Owsley said it was clear that the man did not die from the projectile, which had been snapped off. "This was a healed-over wound," he said. But the spear point, which was made of basalt, will be the guiding clue as anthropologists seek a match to other cultures. Kennewick Man's discovery brought fresh vigor to the discussion over how the Americas were inhabited. Earlier theories held that people crossed a land bridge between Siberia and Alaska. But Kennewick Man, along with a few other findings, suggested that there were waves of migration by different people, some possibly by boat. The scientists who examined the skeleton, and their supporters, still fear that a political move could cut off future study. On behalf of several tribes, Senator John McCain, Republican of Arizona and chairman of the committee that controls Indian affairs, has introduced an amendment to the law that governs custody of ancient remains. His proposed change would broaden the definition of Native American remains, extending it well into the past.
Indians say such a change is needed to protect ancient ancestors, while others say it will make it nearly impossible to study ancient remains, even if they have little or no connection to present tribes. But as the scientists finished their 10-day study of Kennewick Man, with plans to report the results in October, the politics for once seemed to take a back seat to the giddiness of discovery. "This is like an extraordinary rare book," Dr. Berryman said, "and we're reading it one page at a time." From checker at panix.com Tue Jul 19 19:31:52 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:31:52 -0400 (EDT) Subject: [Paleopsych] NYT: How Linguists and Missionaries Share a Bible of 6,912 Languages Message-ID: How Linguists and Missionaries Share a Bible of 6,912 Languages http://www.nytimes.com/2005/07/19/science/19lang.html By MICHAEL ERARD Among the facts in the new edition of Ethnologue, a sprawling compendium of the world's languages, are that 119 of them are sign languages for the deaf and that 497 are nearly extinct. Only one artificial language has native speakers. (Yes, it's Esperanto.) Most languages have fewer than a million speakers, and the most linguistically diverse nation on the planet is Papua New Guinea. The least diverse? Haiti. Opening the 1,200-page book at random, one can read about Garo, spoken by 102,000 people in Bangladesh and 575,000 in India, which is written with the Roman alphabet, or about Bernde, spoken by 2,000 people in Chad. Ethnologue, which began as a 40-language guide for Christian missionaries in 1951, has grown so comprehensive it is a source for academics and governments, and the occasional game show. Though its unusual history draws some criticism from secular linguists, the Ethnologue is also praised for its breadth. "If I'm teaching field methods and a student says I'm a speaker of X, I go look it up in Ethnologue," said Tony Woodbury, linguistics chairman at the University of Texas.
"To locate a language geographically, to locate it in the language family it belongs to, Ethnologue is the one-stop place to look." Yet Ethnologue's most curious fact highlights a quandary that has long perplexed linguists: how many languages are spoken on the planet? Estimates have ranged from 3,000 to 10,000, but Ethnologue confidently counts 6,912 languages. This edition adds 103 languages to the 6,809 that were listed in its 2000 edition - at a time when linguists are making dire predictions that hundreds of languages will soon become extinct. "I occasionally note in my comments to the press," said Nicholas Ostler, the president of the Foundation for Endangered Languages, "the irony that Ethnologue's total count of known languages keeps going up with each four-yearly edition, even as we solemnly intone the factoid that a language dies out every two weeks." This dissonance points to a more basic problem. "There's no actual number of languages," said Merritt Ruhlen, a linguist at Stanford whose own count is "around" 4,580. "It kind of depends on how one defines dialects and languages." The linguists behind the Ethnologue agree that the distinctions can be indistinct. "We tend to see languages as basically marbles, and we're trying to get all the marbles in our bag and count how many marbles we have," said M. Paul Lewis, a linguist who manages the Ethnologue database (www.ethnologue.com) and will edit the 16th edition. "Language is a lot more like oatmeal, where there are some clearly defined units but it's very fuzzy around the edges." The Yiddish linguist Max Weinreich once famously said, "A shprakh iz a dialekt mit an armey un a flot" (or "a language is a dialect with an army and a navy"). To Ethnologue, and to the language research organization that produces it, S.I.L. International, a language is a dialect that needs its literature, including a Bible. Based in Dallas, S.I.L.
(which stands for Summer Institute of Linguistics) trains missionaries to be linguists, sending them to learn local languages, design alphabets for unwritten languages and introduce literacy. Before they begin translating the Bible, they find out how many translations are needed by testing the degree to which speech varieties are mutually unintelligible. "The definition of language we use in the Ethnologue places a strong emphasis," said Dr. Lewis, "on the ability to intercommunicate as the test for splitting or joining." Thus, the fewer words from Dialect B that a speaker of Dialect A can understand, the more likely S.I.L. linguists will say that A and B need two Bibles, not one. The entry for the Chadian language of Bernde, for example, rates its similarity to its six neighboring languages from 47 to 73 percent. Above 70 percent, two varieties will typically be called dialects of the same language. However, such tests are not always clear-cut. Unintelligible dialects are sometimes combined into one language if they share a literature or other cultural heritage. And the reverse can be true, as in the case of Danish and Norwegian. In Guatemala, Ethnologue counts 54 living languages, while other linguists, some of them native Mayan speakers, count 18. Yet undercounting can be just as political as overcounting. Colette Grinevald, a specialist in Latin American languages at Lumière University in Lyon, France, notes that the modern Maya political movement wants to unite under one language, Kaqchikel. "They don't want that division of their language into 24 languages," she said. "They want to create a standard called Kaqchikel." Beyond its political implications, the Ethnologue also carries the weight of a religious mission. The project was founded by Richard Pittman, a missionary who thought other missionaries needed better information about which languages lacked a Bible. The first version appeared in 1951, 10 mimeographed pages that described 40 languages.
"Hardly anyone knew about the Ethnologue back then," said Barbara Grimes, who edited the survey from 1967 to 2000. "It was a good idea, but it wasn't very impressive." In 1971, Ms. Grimes and her husband, Joseph Grimes, a linguistics professor at Cornell, extended the survey from small languages to all languages in the world. What emerged was just how daunting a global Bible translation project was. "In 1950, when we joined S.I.L., we were telling each other, maybe there are about 1,000 languages, but nobody really knew," Ms. Grimes said. In 1969, Ethnologue listed 4,493 languages; in 1992, the number had risen to 6,528 and by 2000 it stood at 6,809. The number will probably continue to rise - 2,694 languages still need to be studied in detail, and in 2000, S.I.L. officials projected that at the current rate of work, a complete survey would not be finished until 2075. (They now say they are working to speed it up.) As for their goal of translating the Bible, Ethnologue's figures show that all or some of it is available in 2,422 languages. Ethnologue lists 414 languages as nearly extinct in 2000, a figure that rises to 497 in the new edition. However, a few linguists accuse the publisher of promoting the trends it says it wants to prevent. Denny Moore, a linguist with the Goeldi Museum in Belém, Brazil, said via e-mail: "It is absurd to think of S.I.L. as an agency of preservation, when they do just the opposite. Note that along with the extermination of native religion, all the ceremonial speech forms, songs, music and art associated with the religion disappear too." Dr. Moore, who won a MacArthur "genius" grant in 1999 for his 18 years of linguistic work in Brazil, adds: "There is no way to resolve this contradiction. The only options are fooling yourself about it or not." S.I.L. officials say missionaries are giving another option to people who are already experiencing cultural shift.
"The charge of destroying cultures has been around for a long time," said Carol Dowsett, a spokeswoman for the publisher. "Basically we're interested in people, and we're interested in helping them however we can." Though the Ethnologue is intended to help spread the word of God, it is being mined for more secular reasons. Computer companies that are developing multilingual software for foreign markets turn to the Ethnologue. "You've got a developer in Silicon Valley, and a person in the field calls them and says, 'We need to provide support for Serbian' or some language the developer's never heard of, so they can pop open the Ethnologue and find out, 'What is this thing?' " says Peter Constable, a former S.I.L. linguist who now works at Microsoft. Ray Gordon, the editor, says producers of "Who Wants to Be a Millionaire" once contacted him, and according to Brian Homoleski, the manager of the publisher's bookstore, several copies were bought after the Sept. 11 attacks by "a U.S. government agency." According to S.I.L. staff members, the American Bar Association, the Los Angeles Police Department, the New York Olympic Committee and AT&T all called for help. Ethnologue's newest step toward worldwide influence has been in the arcane world of the International Organization for Standardization. The survey assigns a three-letter code to each language (English is "eng"), and the 7,000-plus codes (for living and dead languages) are near acceptance in library indexing and multilingual software standards. The codes also form the backbone of the Open Language Archives Community, a Web-based technical infrastructure. Most linguists are unfazed by S.I.L.'s affiliations. "If you took away all the literature done by the S.I.L. people in the last 60 years," said Dr. Ruhlen of Stanford, "you'd be taking away a lot of language documentation for a lot of languages for which there's nothing at all."
From checker at panix.com Tue Jul 19 19:31:56 2005 From: checker at panix.com (Premise Checker) Date: Tue, 19 Jul 2005 15:31:56 -0400 (EDT) Subject: [Paleopsych] NYT: Hunting for Life in Specks of Cosmic Dust Message-ID: Hunting for Life in Specks of Cosmic Dust http://www.nytimes.com/2005/07/19/science/space/19essa.html By DENNIS OVERBYE In astronomy, the race is on to the bottom. Teams of astronomers are staying up all night in the breath-fogging cold of the high-altitude desert of Chile and in the oxygen-starved heights of Hawaiian volcanoes, deciphering downloaded pixels from the Hubble and Spitzer Space Telescopes over soggy pizza, and then upstaging one another's news conferences, all in the search for the smallest, dimmest crumbs of creation, the most mundane specks of dust that may be circling some garden-variety star. It is here, in boring, peaceful meadows of the galaxy, far from fountains of lethal high energy particles, swarms of killer comets or hungry black holes, we are told, that we should look if we want to find habitable abodes and possibly life. And that, of course, would be the most exciting and wonderful result in the history of science, one of the few in astronomy that would probably rebound beyond science, affecting our view of our own status as tenants in this strange house of stars. Last spring the quest ratcheted another notch downward (or upward) when a team of astronomers announced the detection of a planet only seven times the mass of the Earth circling a dim star named Gliese 876 in the constellation Aquarius. This was the first alien planet that astronomers were unabashedly able to identify as a ball of rock, like the Earth, rather than a bag of gas like Jupiter or Neptune. Its discoverers estimated that the new planet was made of iron and silicate and was about 70 percent larger in diameter than Earth. Moreover, as in our own solar system, there are larger Jupiter-size planets orbiting Gliese 876 at greater distances. 
Never mind for the moment that it was so close to its home star, Gliese 876, that you could bake a lasagna on its surface. The planet was hailed as yet another sign that the cosmos was basically friendly and that sooner or later planet hunters would find worlds as small as Earth out there, another step on the road to finding out whether or not humanity is alone in the universe. "We are beginning to find our planet's kith and kin among the stars," said Geoffrey Marcy of the University of California, Berkeley, leader of the team that discovered the Gliese planet. And the joy of family reunion has resounded throughout the cosmos of astronomers for more than 10 years now. It was on the night of July 4, 1995, that Michel Mayor and his student Didier Queloz woke up their wives at 4 a.m. to drink Champagne and eat raspberry pie at an observatory in the south of France. The astronomers, based at the University of Geneva, had just confirmed that an invisible object about half the mass of Jupiter was sailing around the star 51 Pegasi, tugging it to and fro every four days. It was the first planet ever discovered around another Sun-like star. Dr. Mayor and his student were using a humble little-used reflector a mere 76 inches in diameter, way small compared with the 320-inch behemoths then being planned and built for cosmology. Their rivals, Dr. Marcy and Paul Butler, professors at San Francisco State, had to make do with similarly unglamorous circumstances at Lick Observatory. "We were typically assigned only two nights per year, exactly when the Moon was full and no one wanted the telescopes," Dr. Marcy recalled in an e-mail message. Most of the 150 so-called exoplanets subsequently discovered have been found using the "wobble" technique that Dr. Mayor's group and Dr. Marcy's group pioneered. This consists of looking for a to-and-fro motion in the star, induced by the gravitational tug of an orbiting planet. 
In retrospect, it seems only natural that the first planetary systems these astronomers discovered were psychotic beasts unlike anything previously imagined. The more massive a planet is and the more tightly it circles its star, the bigger the wobble and thus the easier it is to detect. As a result, the first planets were so-called "hot Jupiters," orbiting their suns in a matter of days instead of years, lethally searing and dense. As time has gone on and they have gathered more data on various systems, the observers have been able to detect smaller planets and ones that are farther and farther from their stars, an effect astronomers refer to as "drawing back the curtain." Last week astronomers announced the discovery of a planet with three suns, in a configuration the theorists had thought was unlikely, if not impossible. Dr. Marcy said in an interview that when the dust finally settled he expected that planetary systems with architectures like our own - with Jupiter-mass planets in circular outer orbits, leaving space for smaller planets in closer orbits protected from comet showers - would be rare, "but not that rare." Whether those planets will be suitable for life and intelligence is a different matter, however, and one that reaches beyond astronomy into metaphysics and theology. The requirements for Life As We Know It, some astronomers argue, are so exacting that Goldilocks planets like Earth might be rare or even nonexistent. The list of astronomical requirements for life gets longer and more exacting every year: the home star has to be far enough from the galactic center to be away from lethal black hole pyrotechnics, for example, but not so far into the galactic sticks that stellar evolution has not yet produced enough of the heavier elements like oxygen and iron needed for planets and life.
Among other things, its planet has to have liquid water, a magnetic field to keep away cosmic rays, plate tectonics to keep things stirred, a giant outer planet to keep away comets and asteroids and perhaps a big moon to stabilize its rotation axis. Of the 200 billion or so stars in the galaxy, what fraction have the lucky combo to win this cosmic lottery? Faced with the same paltry data, different astronomers get vastly different conclusions, ranging from hundreds of thousands to one, namely our own. Among the various members of the planetary posse, Frank Drake, an astronomer at the SETI Institute and a pioneer of the practice of listening with radio telescopes for alien broadcasts, is one of the most optimistic. "It may well be that there are far more habitable planets orbiting M dwarfs than orbiting all other types of stars combined," he said on the institute's Web site recently, referring to the dim red stars like Gliese 876. The SETI Institute is holding a conference this week on the habitability of planets belonging to such stars. On the other hand are pessimists who argue that planets like the Earth and therefore even simple life forms are rare. One is Ben Zuckerman, an astronomer and exoplanet hunter at the University of California, Los Angeles, who admitted in an e-mail message, "Frankly, the correct answer remains anyone's guess, and the range of guesses is very wide indeed." But, he emphasized, the question can actually be answered by future spacegoing experiments like the Terrestrial Planet Finder and Kepler, which will find and count habitable planets in our corner of the galaxy. A null result would suggest that humans might be alone in the galaxy or the universe. This would merely be an interesting academic argument except for a film that is going around, and which I recently viewed, called "The Privileged Planet," which suggests that the Earth's nice qualities are no accident. 
The film, produced by Illustra Media in California, is based on a book of the same name by Guillermo Gonzalez, an astronomer at Iowa State, and Jay W. Richards, a philosopher and vice president of the Discovery Institute in Seattle. It argues that Earth is so special and unlikely that it must be the work of an intelligent designer. "What if it's not a cosmic lottery?" Dr. Richards asks in the film. The Discovery Institute advocates "intelligent design," a notion that posits the intervention by a designer, whether divine or not, in the origin and history of life, as an alternative to standard evolutionary biology. Illustra Media has produced a series of videos in support of this idea. The showing of the film at the Smithsonian Museum of Natural History last month exacerbated the worries of many astronomers that the Big Bang would be next on the hit list of creationists. Thoughtful cosmologists have long wondered about the apparent friendliness of the universe to carbon-based life forms like us. The notion that the fix must be in from a creator, however, has always been rejected as unscientific thinking. It's the job of scientists, after all, to pursue natural causes and explanations, not settle for supernatural ones. One such explanation for the specialness of the Earth, for example, comes from theories of modern particle physics and cosmology, which seem to suggest that there have been many, many Big Bangs resulting in a plethora of universes. We live in one that is suitable for us the same way that fish live in the sea. A prominent cardinal in the Catholic Church, Christoph Schönborn, recently criticized this idea, along with evolution, in a July 7 article on the Op-Ed page of The New York Times. He said the church needed to "defend human reason" against "scientific claims like neo-Darwinism and the multiverse hypothesis in cosmology invented to avoid the overwhelming evidence for purpose and design found in modern science."
But the argument from design, many scientists say, is circular. Charles Stevenson, a planetary scientist at Caltech, said that it was no surprise that the Earth appears suited to our needs. "That's what Darwinian evolution tells us should happen. We are adapted to our world," Dr. Stevenson said. Who knows what powers atoms in their collective and complex majesty have to respond to their environments over time? Lacking anything approaching a final theory of physics, or of how planetary systems form, and of more than one example of life - the biosphere on Earth - scientists have no way of actually knowing how unlikely various properties of life and the universe are. In science the smart money is always on surprise. Everybody agrees that intelligent technological life is a much greater leap, but it might be instructive to consider who is laying down bets on at least looking for it. Among the financial angels of the search for extraterrestrial intelligence, or SETI, have been people like Paul Allen, the co-founder of Microsoft; the late Barney Oliver, William Hewlett and David Packard, leaders of Hewlett-Packard; Gordon Moore, the founder of Intel; and the novelist Arthur C. Clarke, who invented the idea of the communications satellite. The smart money isn't always right, but this is certainly smart money. From waluk at earthlink.net Wed Jul 20 02:39:12 2005 From: waluk at earthlink.net (Gerry) Date: Tue, 19 Jul 2005 19:39:12 -0700 Subject: [Paleopsych] threats in groups In-Reply-To: <14342610.1121799933391.JavaMail.root@wamui-andean.atl.sa.earthlink.net> References: <14342610.1121799933391.JavaMail.root@wamui-andean.atl.sa.earthlink.net> Message-ID: <42DDB950.6080903@earthlink.net> Doesn't this happen all the time whenever anyone wishes to lead a group? No group is ever 100% (or even majority) for a given idea. It seems to me that this is what's great about Democracy. The present time is what will rule the decisions. 
Gerry shovland at mindspring.com wrote: >Groups eventually recognize that someone in a leadership position >is acting contrary to the interests of the group, but not always >before a lot of damage is done, for example Enron. > >-----Original Message----- >From: Michael Christopher >Sent: Jul 18, 2005 10:10 PM >To: paleopsych at paleopsych.org >Subject: [Paleopsych] threats in groups > > > > >>>What events signal to individuals that the >>> >>> >functioning of their group may be compromised? Because >groups enhance individual success by providing members >with valuable resources, members should be attuned to >potential threats to group-level resources such as >territory, physical security, property, economic >standing, and the like.<< > >--Which brings up the issue of what happens when group >members who are threatening to the long term health of >the group are rewarded for a single contribution, as >in an abusive male rewarded for being able to punish >an enemy, or a member of a corporation who brings in a >lot of money while undermining the integrity of the >whole. Or a group that colludes with each other, >forgives each other's "sins" in order to stay powerful >as a group, to the detriment of the tribe. Only when >the tribe can meet the needs that corrupt members meet >in order to avoid punishment can dysfunctional groups >be dissolved. Otherwise you have people who are very >dysfunctional carrying the flag/bible, posturing as >defenders of the tribe to avoid being seen as a >problem themselves. > >michael > >__________________________________________________ >Do You Yahoo!? >Tired of spam? Yahoo! 
>Mail has the best spam protection around >http://mail.yahoo.com >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > From shovland at mindspring.com Wed Jul 20 07:42:54 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Wed, 20 Jul 2005 09:42:54 +0200 (GMT+02:00) Subject: [Paleopsych] New America Project Message-ID: <200305.1121845374806.JavaMail.root@wamui-backed.atl.sa.earthlink.net> The NeoCons spent many years trying to sell their ideas. Bush I said no, and Clinton said no. Bush II said yes. Now they have had their chance, and it's not working. After 6 years, we are less secure, less prosperous, and more divided than we were when they started. I see no reason to assume that things will improve if we "stay the course." Most of us cannot wait years or decades to see if they are correct after all. We are facing a future where we will hit the limits of petroleum and water, and in which there will be fierce competition for all other natural resources. Playing the global bully will only unite the world against us, and the situation in Iraq tells us that the cost-benefit ratio of waging war for resources is negative. We don't need to fight the NeoCons because they will collapse under the weight of their failed policies. We do need to build a vision to replace theirs, and I suggest that we call it the "New America Project." If in the future we cannot do well by conquering resources, we will have to do well by selling our ideas. There is no shortage of brainpower in America. Starting with basic needs for food, shelter, health care, transport, and communications, there is an abundance of ideas which we can implement to improve life all around the globe.
If we need more ideas, we can bring them in from the void. It starts with intent. Steve Hovland Vienna, Austria July 20, 2005 From ljohnson at solution-consulting.com Wed Jul 20 12:16:32 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Wed, 20 Jul 2005 06:16:32 -0600 Subject: [Paleopsych] New America Project In-Reply-To: <200305.1121845374806.JavaMail.root@wamui-backed.atl.sa.earthlink.net> References: <200305.1121845374806.JavaMail.root@wamui-backed.atl.sa.earthlink.net> Message-ID: <42DE40A0.4040808@solution-consulting.com> The name needs work -- sounds too much like New American Century project, where the neo-con movement articulated the necessity of a new foreign policy, based on values instead of national interest and realpolitik. Actually, Clinton strongly considered invading Iraq, but was derailed by his Rod of Power. Clinton was clearly a neocon in his foreign policy, witness Bosnia and Somalia. The difference was that he was too dependent on polls and chickened out - Mogadishu, Black Hawk Down - and cut and run, thus encouraging Osama Bin Laden to view Americans as cowards and promoting the idea that they can attack us. Bush II seems to care nothing for polls and thus stays the course. Things are improving in Iraq, but are not reported. http://www.opinionjournal.com/extra/?id=110006964 The economy is strong, and unemployment is at 5% which historically is pretty good. The deficit is coming down as tax revenues go up. The deficit shrank by $100 billion this year. The Laffer Curve still works. http://www.opinionjournal.com/editorial/feature.html?id=110006973 Bush has replaced a moderate Supreme Court Justice with a hard-to-beat conservative, which will swing the court radically in the next few years, reversing the 5-4 O'Connor votes. Best wishes with your New America project, I say you have a steep road ahead of you.
Socialism always fails, and free market capitalism in the context of a liberal (small L) rule-of-law democracy is a terrible system but better than all others. You will have to invent some pretty clever ideas. shovland at mindspring.com wrote: >The NeoCons spent many years trying to sell their ideas. > >Bush I said no, and Clinton said no. Bush II said yes. > >Now they have had their chance, and it's not working. > >After 6 years, we are less secure, less prosperous, and more divided than we were when they started. > >I see no reason to assume that things will improve if we "stay the course." Most of us cannot wait years or decades to see if they are correct after all. > >We are facing a future where we will hit the limits of petroleum and water, and in which there will be fierce competition for all other natural resouces. > >Playing the global bully will only unite the world against us, and the situation in Iraq tells us that the cost-benefit ratio of waging war for resources is negative. > >We don't need to fight the NeoCons because they will collapse under the weight of their failed poliicies. > >We do need to build a vision to replace theirs, and I suggest the we call it the "New America Project." > >If in the future we cannot do well by conquering resources, we will have to do well by selling our ideas. > >There is no shortage of brainpower in America. > >Starting with basic needs for food, shelter, health care, transport, and communications, there is an abundance of ideas which we can implement to improve life all around the globe. If we need more ideas, we can bring them in from the void. > >It starts with intent. > >Steve Hovland >Vienna, Austria >July 20, 2005 > > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shovland at mindspring.com Wed Jul 20 18:04:49 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Wed, 20 Jul 2005 20:04:49 +0200 (GMT+02:00) Subject: [Paleopsych] New America Project Message-ID: <27661426.1121882690049.JavaMail.root@wamui-andean.atl.sa.earthlink.net> An HTML attachment was scrubbed... URL: From anonymous_animus at yahoo.com Wed Jul 20 18:36:33 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Wed, 20 Jul 2005 11:36:33 -0700 (PDT) Subject: [Paleopsych] toxic groups In-Reply-To: <200507201800.j6KI0NR11119@tick.javien.com> Message-ID: <20050720183633.31449.qmail@web30806.mail.mud.yahoo.com> Steve says: >>Groups eventually recognize that someone in a leadership position is acting contrary to the interests of the group, but not always before a lot of damage is done, for example Enron.<< --True. The offending group often manages to throw a few scapegoats out to delay their own exposure as well. Then they have to start throwing out the "insiders" who get caught, which quickly unravels the group as paranoia and backstabbing set in. Michael Michael C. Lockhart www.soulaquarium.net Blog: http://shallowreflections.blogspot.com/ Yahoo Messenger: anonymous_animus "We are stardust, we are golden, We are billion year old carbon, And we've got to get ourselves back to the garden." Joni Mitchell "Morality is doing what is right no matter what you are told. Religion is doing what you are told no matter what is right." - Unknown "The most dangerous things in the world are immense accumulations of human beings who are manipulated by only a few heads." - Jung __________________________________ Yahoo! Mail Stay connected, organized, and protected. 
Take the tour: http://tour.mail.yahoo.com/mailtour.html From checker at panix.com Wed Jul 20 19:55:04 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 15:55:04 -0400 (EDT) Subject: [Paleopsych] H-N: Nigel Barber: Evolutionary Explanations for Societal Differences in Single Parenthood Message-ID: Nigel Barber: Evolutionary Explanations for Societal Differences in Single Parenthood http://human-nature.com/ep/articles/ep03142174.html 5.7.10 [Thanks to Laird for this.] Evolutionary Psychology 3: 142-174 Original Article Evolutionary Explanations for Societal Differences in Single Parenthood Nigel Barber, Ph.D., 70 Kent Street, Portland, ME 04102, USA. Abstract: The new research strategy presented in this paper, Evolutionary Social Science, is designed to bridge the gap between evolutionary psychology that operates from the evolutionary past and social science that is bounded by recent history. Its core assumptions are (1) that modern societies owe their character to an interaction of hunter-gatherer adaptations with the modern environment; (2) that changes in societies may reflect change in individuals; (3) that historical changes and cross-societal differences are due to the same adaptational mechanisms; and (4) that different social contexts (e.g., social status) modify psychological development through adaptive mechanisms. Preliminary research is reviewed concerning historical, societal, and cross-national variation in single parenthood as an illustration of the potential usefulness of this new approach. Its success at synthesizing the evidence demonstrates that the time frames of evolutionary explanation and recent history can be bridged. Keywords: Evolutionary Social Science; Evolutionary Psychology; Single Parenthood; Societal Differences; Historical Change; Adaptive Development; Sexual Development; Poverty; Values; Cultural Relativism; Sweden; England. 
_________________________________________________________________ Introduction Evolutionary psychology (EP) focuses on human adaptations to the hunter-gatherer way of life that is believed to have shaped human psychology over approximately two million years (Barkow, Cosmides, and Tooby, 1992; Buss, 1999; Cosmides and Tooby, 1987; Durrant and Ellis, 2003). This approach generally identifies evolutionary influences on modern behavior in terms of cross-cultural universals such as proposed universal sex differences in sexual jealousy and mate selection criteria (Geary, 1998), but recognizes that universal human characteristics, such as emotions, may find different expression in different societies (Fessler, 2004). It sees social sciences as falling within the natural sciences. By contrast, "standard" social science focuses on the present and attempts to account for behavioral variation in terms of contemporary influences without reference to the evolutionary past (Lopreato and Crippen, 1999). Although the strategy of identifying universals at the level of information processing mechanisms of the brain was an important point of departure in the emergence of evolutionary psychology, this approach requires elaboration if it is to account for variation in modern behavior. Just as the social sciences are stuck in the present, so to speak, evolutionary psychology is focused on the evolutionary past. Admittedly, many evolutionary psychologists have wrestled with the problem of how one gets from evolved psychology to modern behavior using constructs that include cognitive modules, Darwinian algorithms, memes, and so forth (Barkow, Cosmides, and Tooby, 1992). 
The new research strategy of evolutionary social science (ESS, Barber, 2005) strives to overcome the temporal problem (i.e., bridging the evolutionary past and the present) by using concepts of evolutionary adaptation to account for variation in modern behavior whether between siblings, between families, or between societies. This paper employs the new research strategy to organize data concerning single parenthood in a way that can stimulate new research. Before analyzing societal variation in single parenthood, it must be acknowledged that this new approach makes many controversial assumptions. It would be helpful to make these assumptions explicit and to explain briefly why they are necessary. The paper then shows how these assumptions help to organize data concerning single parenthood in different societies and at various points in history. The Assumptions of ESS ESS confronts evolutionary novelties in human social behavior produced by modern environments and thus aims to unite the evolutionary frame of explanation used by evolutionary psychologists and others with the historical time frame of many social sciences. To this end, it is necessary to make assumptions that have not been made previously, or at least not in an explicit and systematic way, with the aim of uniting the time frames of evolution and recent history. Some of these assumptions are sufficiently complex, problematic, and even counterintuitive that they require some elaboration. Assumption 1: That modern societies owe their character to an interaction of hunter-gatherer adaptations with modern ecologies and environments. This assumption is fairly uncontroversial. However, as previously noted, existing social sciences generally do not connect modern life with evolutionary adaptations and are quite resistant to doing so. 
Assumption 2: Changes in societies may be caused by changes within individuals and they can affect individuals via bottom-up phenomena rather than via top-down transmission of values or behaviors. This form of reduction is actively resisted in some social sciences, but it is worth emphasizing that scientific explanations almost always proceed by accounting for complex events in terms of more elementary constituents. Thus, the "behavior" of a molecule is always reducible to the characteristics of the constituent atoms. A particularly interesting example of individual change mediating societal differences is the way that sexual liberation of women in a particular society is related to an adverse marriage market in which women's individual chances of contracting a favorable marriage are bleak, so that they must assert themselves in the monetary economy through paid employment or operation of businesses (Barber, 2002a, 2004a; Guttentag and Secord, 1983). This phenomenon is by no means recent, cropping up in 14th-century England and classical Sparta, for example. To say that social change in such cases is caused by forces acting at the individual level might seem like a semantic exercise, given that the marriage market difficulties of females are distributed throughout the society, but ESS opts to use individual-level explanations of social arrangements because these are theoretically relevant, viable, and scientifically plausible. Assumption 3: That historical changes and cross-societal differences are due to similar adaptational mechanisms. This assumption contradicts the argument of cultural relativism. This is not to deny that all societies have some unique features, such as the peculiarities of their language communication system, their forms of dress, body ornamentation, basketry, pottery design, and so forth. 
Rather, the argument is made that to the extent the phenomena are truly unique, they defy scientific explanation and are thus of minimal interest to scientists, as opposed to artists, for example. One practical ramification of Assumption 3 is that historical mechanisms can be studied indirectly through cross-societal comparisons of contemporary peoples. To take a simple example, the high fertility of women in Africa today is due to the same agricultural mode of production that supported the majority of American women a century ago, and was associated with high fertility for them also. Assumption 4: That different social contexts (e.g., social status) modify psychological development through adaptive mechanisms. This can be considered a general theory of psychological development that not only accounts for the adaptive match between individual behavior and the social environment, but also helps to explain historical and cross-national societal differences. This assumption can be rephrased as an expectation that certain social inputs during development will produce specific behavioral/psychological outcomes. For example, corporal punishment increases interpersonal aggression, helping to explain why parents in warlike societies are more likely to use harsh disciplinary tactics on their sons (Ember and Ember, 1994). Similarly, there is evidence that reproductive behavior, including single parenthood, is affected by childhood stressors. Childhood Stress, Divorce, and the Development of Reproductive Behavior Psychological stress in childhood influences adult sexual psychology and behavior in part because it alters brain development. Poverty is one example of a complex stressor in modern societies, and researchers recently discovered that childhood stress alters brain structures and thus potentially modifies the sexual psychology of males and females (Teicher, Anderson, Polcari, Anderson, and Navalta, 2002). 
Brain biology is far from being the complete picture, of course, and marriage is greatly affected by the availability of suitable partners, for example. Whatever the underlying mechanisms, men raised in poverty are less likely to provide, and women are less likely to require, the emotional commitment and economic support for children that are characteristic of the marriage contract around the world, so that single parenthood is correlated with low income within a country. Poverty is not the only source of childhood stress, of course. If psychological stress affects sexual development and reproductive behavior in predictable ways, then other sources of childhood stress would be expected to have similar consequences for adult sexual behavior. Parental divorce is an interesting type of childhood stressor in this context because it is more of a middle-class experience in the U.S., for example, not because poor people enjoy stable marriage, but because they are considerably less likely to wed in the first place (Abrahamson, 1998). Although children of divorced parents experience a modest decline in living standards, they remain much better off, on average, than children raised from the beginning by single mothers (Waite and Gallagher, 2000). This means that divorce offers a useful window into the effects of psychological stress, unalloyed with extreme economic deprivation, on the development of sexual behavior. Wallerstein and Blakeslee (1996) concluded that most American children who experience a bitterly fought parental divorce suffer lifelong problems in forming committed sexual relationships. Their conclusion is supported by the following data on children of divorced parents (Wallerstein, 1998):
- Females are approximately 50% more likely to give birth as teens.
- They are approximately 48% more likely to divorce themselves (60% for white women and 35% for white men).
- Their marriages may be either highly impulsive (particularly for females) or delayed due to lack of self-confidence and trust (particularly for males). About a quarter of children of divorced parents (24%) never marry, compared to one in six (16%) for the general population, suggesting a lack of trust in intimate relationships.
- They suffer from emotional problems (e.g., depression, behavioral disorders, learning disabilities) at a rate that is two-and-a-half times that of the general population.
Correcting the divorce rates by the marriage rates, it can be estimated that children of divorced parents have only about a one-in-five probability of being stably married, compared to a two-in-five chance for the general population (assuming a non-divorce rate of .50 multiplied by a marriage rate of .84). Compelling as such numerical differences are, they nevertheless minimize the relationship correlates of parental divorce because they leave out the emotional pain, anxiety, conflict, and self-doubt that Wallerstein's informants described during lengthy interviews in the context of protracted longitudinal research. Even those who contributed to stable marriage statistics were often far from happy in their union. According to Wallerstein and Blakeslee (1996), the facade of marital permanence frequently concealed much discontent. Low expectations, combined with a sense of helplessness, often kept children of divorced parents in wrenchingly discordant marriages that more confident individuals might have changed or exited. Evidently, conflict and unhappiness in the parental marriage creates an expectation in children that their own marriages may be discordant, or fail. Males and females often respond differently to parental conflict (Barber, 1998a, b; Wallerstein and Blakeslee, 1996). Young women may react to parental friction and separation with precocious sexuality. 
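The one-in-five versus two-in-five estimate can be checked with a short script. The .84 marriage rate and .50 non-divorce rate for the general population come from the text; the corresponding rates for children of divorce (a .76 marriage rate implied by the 24% who never marry, and a divorce risk inflated by the reported ~48%) are back-of-the-envelope assumptions for illustration, not figures taken directly from Wallerstein.

```python
# Rough check of the "one-in-five vs. two-in-five" stable-marriage estimate.
# The 1.48 divorce-elevation factor is an assumption based on the "~48% more
# likely to divorce" figure in the text, so treat the second result as illustrative.

def p_stably_married(marriage_rate, divorce_rate):
    """Probability of both marrying and staying married."""
    return marriage_rate * (1.0 - divorce_rate)

# General population: marriage rate .84, divorce rate .50
general = p_stably_married(marriage_rate=0.84, divorce_rate=0.50)

# Children of divorced parents: 24% never marry; divorce risk ~48% higher
children_of_divorce = p_stably_married(marriage_rate=1.0 - 0.24,
                                       divorce_rate=0.50 * 1.48)

print(f"general population:  {general:.2f}")             # ~0.42, about two in five
print(f"children of divorce: {children_of_divorce:.2f}") # ~0.20, about one in five
```

Under these assumptions the arithmetic reproduces the text's estimate: roughly 0.42 for the general population and roughly 0.20 for children of divorced parents.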
They initiate sexual activity sooner, and may even reach sexual maturity earlier, compared to young women raised in intact marriages (Ellis, 2004; Ellis, Bates, Dodge, Fergus, Horwood, Petit, et al., 2003). These phenomena help to explain the higher rate of teen pregnancy and childbearing among children of divorced parents. Marriages are often early and impetuous as well. In the absence of a reasonable period of courtship in which the couple get to know each other, and conduct a protracted evaluation process, marriages are liable to be incompatible and unstable. Early marriages are also more likely to end in divorce. While the young women may enter marriage recklessly, Wallerstein and Blakeslee (1996) describe a rather different type of commitment problem as characteristic of male children of divorced parents. These men may experience lifelong difficulties in expressing, or even acknowledging, their emotions, which impedes sexual relationships and militates against happiness in a marriage. Many fear intimacy and postpone committed relationships (Barber, 1998a, b). Some children may feel so traumatized by parental divorce that they are inclined to postpone marital commitment (Wallerstein and Blakeslee, 1996), preferring to cohabit before marriage (Whitehead and Popenoe, 2002). For individuals who fear marital commitment, this might seem a sensible way of progressing to a more committed, more permanent relationship. Informal unions are highly unstable, however (Smock, 2000), possibly because of the lack of commitment with which they begin (Waite and Gallagher, 2000). Wallerstein and Blakeslee (1996) serve rather like a Greek chorus in emphasizing the tribulations inflicted on children by parental divorce. By contrast, Hetherington and Kelly (2002) serve as cheerleaders for children's powers of recovery following parental divorce. 
Hetherington collected data on some 1,400 families and their 2,500 children spanning three decades, focusing on objective facts rather than the more subjective interview techniques employed by Wallerstein on smaller samples. Hetherington found that the majority of children are resilient and bounce back from the distress of parental divorce in a few years without experiencing major behavioral or emotional problems. Hetherington's optimistic conclusions are summarized in a Time magazine interview (Corliss, 2002): "A lot of the current work makes it sound as if you've given your kids a terminal disease when they go through a divorce. I am not pro-divorce. I think people should work harder on their marriages: support each other and weather the rough spots. And divorce is a painful experience. I've never seen a victimless divorce - where the mother, father, or child didn't suffer extreme distress when the family broke up. But 75% to 80% do recover." (p. 40) By "recovery" Hetherington means the absence of serious psychological, social, or emotional problems that would warrant professional attention. Given that 75% of children "recover" by this definition, 25% experience serious emotional problems, compared to 10% of children from intact two-parent families. In other words, their risk of serious emotional problems is more than doubled. In addition to those individuals with diagnosable psychological problems in the years immediately following parental divorce, many others could have serious lifelong problems in forming happy and committed reproductive relationships. These problems are at least partly attributed to the stress of parental divorce, although other environmental factors, such as social learning and inadequate opportunities to acquire social skills, cannot be ruled out. Genetic influences may also matter. 
This point is most clearly established in research finding an association of the androgen receptor gene with aggression, impulsivity, number of sexual partners, and parental divorce in both sexes, as well as with female age of menarche (Comings, Muhleman, Johnson, and MacMurray, 2002), although the effect sizes were modest. Yet, the problems of children of divorced parents are not just a product of inheriting "hostile" or "emotionally troubled" genes from parents. This conclusion emerges from behavior genetics research comparing outcomes for adopted children with those of biological children subsequent to parental divorce. Adoptees suffer more from emotional problems following parental divorce even though they share no genes with the divorcing parents (O'Connor, Caspi, DeFries, and Plomin, 2000). Quinlan's (2003) analysis of data from the National Survey of Family Growth also found that parental separation before the age of five years predicted earlier menarche, earlier age at first pregnancy, and shorter duration of first marriage. Parental separation during adolescence was more strongly predictive of number of sex partners, however, suggesting that changes in care-taking arrangements have complex age-dependent effects on the development of sexual and reproductive behavior. If the stress of parental divorce and/or separation can have substantial effects on marital commitment in the second generation, it is not hard to imagine that the multiple stresses of poverty could have comparable effects on sexual behavior and marriage (see below). In summary, a stressful early childhood increases the probability of single parenthood because of the resulting difficulty in forming committed reproductive relationships. This is true of parental conflict surrounding divorce, but it may also be linked to childhood poverty, or other causes, thus implicating developmental changes in the brain. 
On the other hand, single parenthood may occur at high levels in societies where children are exceptionally well off and do not have highly stressful childhoods, as is true of Sweden, for example, pointing to multiple causation. Nonmarital reproduction is a complex phenomenon that reflects the reproductive strategies and sexual behavior of both sexes. These are affected in interesting and complex ways by economic influences and marriage markets, as illustrated by research on the history of single parenthood. Poverty and the History of Single Parenthood Poverty can affect reproductive behavior in two different ways, each suggesting adaptive design: through the effects of stressors on brain development, and through its effects on marital opportunity. There is abundant historical evidence that poverty was an important influence on single parenthood because of its limiting effects on marital opportunity due to scarcity of men who were economically qualified for marriage. Even today, depressed economic conditions around the world, and high male unemployment, occur in nations that have high ratios of nonmarital births (Barber, 2003c). Historical evidence indicates that the reproductive practices of young people in respect to nonmarital childbearing were affected by economic circumstances (Abrahamson, 2000). Economic determinism is not the only possible explanation for historically changing single parenthood ratios, of course. Many social historians believe that changes in single parenthood ratios are due to changing degrees of sexual liberation. Thus, the steady rise in single parenthood ratios for many European countries throughout much of the 19th century is attributed to increasing sexual liberation associated with industrialization of the economy and urbanization of the population. 
There is little doubt that changes in single parenthood ratios of this period were genuinely connected to the ongoing Industrial Revolution, but appealing to sexual liberation as the cause falls short as a scientific explanation, particularly failing to explain historical changes in sexual attitudes, as explained in more detail below. The increase in single parenthood during the 19th-century period of industrialization may be illustrated by the case of France, where single parenthood ratios rose from about 5% of all births at the beginning of the century to about 10% at its end (Shorter, 1975). The largest increase in single parenthood occurred in cities, such as Paris and Bordeaux, where illegitimacy ratios surged above 30%, comparable to the level seen in many modern cities. Other European cities manifested a similar rise in single parenthood, partly reflecting an increase in the number of young single women who migrated to cities and towns in response to job opportunities associated with urban development following the Industrial Revolution. There are many reasons why urbanization may increase single parenthood. Thus, living in an unfamiliar social environment, young women may have experienced difficulty in finding husbands. This problem was exacerbated by an excess of single women over single men (because young males were more likely to remain at home to work on family farms). The same phenomenon is still in evidence in modern cities, where women generally outnumber men (Guttentag and Secord, 1983). Thus, for U.S. metropolitan areas, there are just 93 males over the age of 16 years per 100 females when people living in prisons and other institutions are excluded (Barber, 2002d). European single parenthood ratios increased during the 19th century, until about 1880, when a decline began that lasted for over two decades (Shorter, 1975). 
This decline was accompanied by a decrease in marital fertility, and both phenomena evidently reflect use of condoms, or other contraceptive devices, that became widespread about this time (Langford, 1991). The sharp and widespread increase in single parenthood following the industrial revolution is apparently without historical precedent and thus a challenge for historians as well as ESS. Many historians see increased single parenthood as a product of sexual liberation (or moral degeneration, depending on their perspective). According to the sexual liberation argument, urbanization brought large numbers of lustful young men and women together in an environment where the watchful eyes of relatives, and other traditional constraints on sexual behavior, no longer mattered. They converted newfound sexual opportunity into sexual expression outside marriage, thereby boosting illegitimate births. The sexual liberation interpretation may well describe changing patterns of sexual behavior, but it is far from satisfying when judged by the criteria of a scientific explanation for those changes. One problem is circularity. Sexual liberation is defined by an increased probability of sex outside marriage. For much of the 19th century, prior to widespread use of contraceptives, increased extramarital sexuality produced an inevitable rise in single parenthood. (It is true, however, that premarital conceptions could be, and often were, legitimized by marriage.) If such complications are set aside, attributing increased ratios of single parenthood to sexual liberation is largely an exercise in circular reasoning. If we did not have data on single parenthood, we might not know that sexual behavior was "liberated." 
Other clues to such trends may be uncovered by historians, of course, including explicit depictions of sexual behavior in the arts and literature, or an increase in tax revenues from prostitution, but such measures of sexual liberation often lack the consistency and validity of the illegitimacy ratio itself. Strictly speaking, scientific explanation requires that the explanatory variable be measured independently of what is being explained, a criterion that is often lacking in social research. Yet, it is disputable whether sexual liberation can be reliably measured in historical research without referring to the illegitimacy ratio. If sexual liberation cannot be separated from single parenthood, then one phenomenon cannot be used as a scientific explanation of the other: they are not independent. Explaining one in terms of the other is thus an exercise in circular reasoning. Even if sexual liberation could be measured independently of premarital sexuality, there is still a problem about direction of causation between attitudes and behavior. Do sexually liberated attitudes cause sexually liberated behavior, or do attitudes conform to behavior? A large technical literature on the connection between sexual attitudes and behavior suggests that both directions of causation might apply (Moors, 2000). Young women who cohabit become more sexually liberated in their attitudes following this experience, for example. Such evidence once again highlights the difficulty of establishing scientific independence between attitudes of sexual liberation and sexually liberated behavior. The sexual liberation hypothesis of increasing single parenthood with urbanization is thus not a genuine explanation: it redescribes this historical change without identifying its cause. 
Even if one admits that female residents of Paris produced 30% of their offspring outside wedlock in 1880 due to sexual liberation, this does not solve the fundamental problem of why Parisiennes were so much less liberated a century earlier, when nonmarital birth ratios were below 5%. Sexual liberation interpretations may deflect attention away from the real drivers of historical change, of which economic factors seem particularly important. The influence of economic constraints on family formation is well illustrated by English historical research dealing with local increases in nonmarital birth ratios (Abrahamson, 2000). Historical "Outbreaks" of Single Parenthood in English Communities Single parenthood is rarely even mentioned by anthropologists, suggesting that it would have been difficult for women in the evolutionary past to raise children alone. Similarly, throughout the era of written history, single parenthood was not a practical alternative and was chosen only as a last resort by women who failed to marry. In addition to the economic difficulties of single parenthood, illegitimate children were at a real social disadvantage in England. They were stigmatized or ostracized, and suffered real legal disadvantages, such as being unable to inherit property. The great majority of English women, typically in excess of 95%, were married when they gave birth, suggesting that the minority of single mothers were victims of ill fortune due to unintended pregnancy combined with an inability to demand marriage from the father (Shorter, 1975). In such a social environment, women raised children alone only for lack of a better alternative. Marriage prospects were severely curtailed by economic problems. This phenomenon is illustrated in English history, where crop failures forced couples to delay marriage because they lacked the economic resources to set up an independent household. 
If they were sexually active before marriage, this meant they were at greater risk of producing out-of-wedlock births. When Abrahamson (2000) examined historical surges in local out-of-wedlock birth ratios in England between 1590 and 1985, he found that all eleven cases of high nonmarital birth ratios followed an economic downturn. This phenomenon may be illustrated by the case of Terling, a small agricultural community 30 miles northeast of London. Between 1560 and 1590, nonmarital births were low, even by historical standards, constituting between 1% and 2% of total births. The illegitimacy ratio rose between 1590 and 1605, when it reached 10%. Abrahamson attributes this increase to an economic phenomenon that is familiar from more recent periods, namely price inflation. Terling's economic problems began in the 1580s and can be traced to population growth. With more mouths to feed, and an increased demand for food, prices soared. Price inflation eroded the purchasing power of wages, making it difficult for the landless poor to make ends meet. This food scarcity was aggravated by a series of crop failures during the 1590s. The worsening economic situation made it economically impossible for many young couples to marry and set up households, even if the woman was pregnant. Being an unmarried mother invited legal sanctions, and pregnant women could be punished for immorality, or "fornication." At the peak of the illegitimacy "outbreak," legal enforcement was comparatively lax. Only a third of unmarried pregnant women were prosecuted, compared to three-quarters of them in more normal times. Many women were excused from prosecution on the understanding that they would marry when their fortunes improved. This comparative leniency evidently reflected some understanding that marriage was constrained by difficult economic circumstances. When the economy improved, fornication laws were enforced more rigidly again. 
A similar change occurred in respect to enforcement of prostitution laws. In the difficult period after 1590, when few young men were marrying and the services of prostitutes were in high demand, enforcement of vice laws was also relaxed, providing further evidence of the plasticity of moral and legal codes in the face of changing economic conditions (Abrahamson, 2000). The constraints faced by young women in 16th-century England are obviously very different from the situation of modern women. The use of effective birth control, for example, means that single women are quite unlikely to become pregnant as a result of delayed marriage today. Even so, economic conditions affect the marriage market and single parenthood ratios of the 20th century in complex ways. This phenomenon has often been highlighted in connection with the marriage difficulties of African American women, for example. African American scholars, including Wilson (1997), emphasize the impact of declining job prospects for African American men on single parenthood. He points to the decline in well-paid blue-collar manufacturing jobs in the U.S. after about 1950. Many African American men were subsequently forced into poorly-paid dead-end service jobs that provided little chance of supporting a family. According to Wilson, this meant that a large proportion of African American men were economically disqualified from marriage. The scarcity of men who were economically qualified for marriage was exacerbated by a host of other factors reducing the availability of men for marriage. They included: low sex ratios at birth, higher mortality of young men, marriage of more black males than females outside their ethnic group, and high rates of incarceration in prisons. In 1950, for example, there were approximately 70 employed men aged 20-24 years per 100 same-aged women (Staples, 1985). Thirty years later, in 1980, there were only 50 marriageable men per 100 women in this age category. 
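The index in the Staples figures above, marriageable men per 100 same-aged women, is a simple ratio of cohort counts. A minimal sketch follows; the cohort counts used below are invented for illustration and are not from Staples (1985).

```python
# Sketch of the "marriageable men per 100 women" marriage-market index.
# The counts are hypothetical; the point is only how the index is constructed.

def men_per_100_women(marriageable_men, women):
    """Economically qualified men per 100 same-aged women in a cohort."""
    return 100.0 * marriageable_men / women

# Hypothetical cohort counts:
ratio = men_per_100_women(marriageable_men=35_000, women=50_000)
print(f"{ratio:.0f} marriageable men per 100 women")  # 70
```

A ratio of 70, as in the hypothetical cohort here, would mean that even if every qualified man married within the cohort, 30 of every 100 women could not find a marriageable partner.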
Other research supports the hypothesis that the reduced marriage opportunities of African American women play an important role in accounting for their high single parenthood ratios. Thus, African Americans living in metropolitan areas where there is a scarcity of marriageable men have higher ratios of single parenthood (Fossett and Kiecolt, 1991). Based on state-level data, South and Lloyd (1992) found that ratios of nonmarital births decline with increases in the availability of marriageable men (as indexed by the sex ratio). South (1996) found, however, that although young women were more likely to marry as the availability of males increased, increases in the proportion of males in high schools increased the chances of single parenthood, a puzzling result that is inconsistent with the rest of the literature. Births to African American teens (the great majority of which are to single mothers) were also predictable from reduced mate availability, according to research comparing U.S. metropolitan areas (Barber, 2002b) and states (Barber, 2002c) in analyses that controlled for poverty and unemployment. The same economic principles thus help explain why single parenthood was common among 20th-century African Americans as well as 16th-century farmers in England. A similar logic applies to poor 20th-century European Americans. In some economically depressed White neighborhoods, including the lower end of South Boston, the majority of children are born outside marriage (73% in 1990; Whitman, 1996). Where there is a severe scarcity of marriageable men (which is more likely in poor communities), women must choose between raising their children outside marriage and forgoing reproduction altogether. The marriage market, and the economic variables affecting it, thus provide a good understanding of historical changes in single parenthood. This conclusion is also supported by time-series analyses of single parenthood in England, Scotland, and the U.S. (Barber, 2004a).
A similar pattern emerges from cross-national studies, as well as from comparisons among U.S. states and metropolitan areas (Barber, 2000a, 2000b, 2001, 2002a) that controlled for numerous variables such as female literacy, contraception use, poverty, unemployment, incarceration rates, and so forth. Whatever unit of analysis or time period is studied, the data are consistent in showing that young women who face a scarcity of marriageable men are more likely to begin their reproductive careers early in life and to raise their children with minimal paternal investment, consistent with the anthropological conclusion that if men cannot be relied upon to provide long-term parental investment, women gravitate to earlier reproduction (Draper and Harpending, 1982). The data on single parenthood are thus consistent with Assumption 3, which states that historical changes and societal differences are due to the same mechanisms. Of course, these data do not guarantee such uniformity for other areas of study, but they do at least suggest that ESS is a workable research strategy. Environmental influences on reproductive strategies do not end with the marriage market, of course. Within a society or community, particular individuals are more or less likely to form long-term, committed, romantic relationships depending, in part, on their childhood experiences, including the stresses of poverty (or parental divorce). This phenomenon thus provides a concrete example of Assumption 4 -- that different social contexts modify psychological development adaptively.

Poverty and the Emotional Basis of Single Parenthood

There is no doubt that economic disadvantage impaired marriage formation over many centuries of European history. A crucial question to ask in this connection is whether individuals make adaptive emotional adjustments that allow them to fit in with an environment of reduced marital opportunity for either sex.
Perhaps surprisingly, there is fairly good evidence that the emotional development of the individual is modified in ways that help her, or him, fit in with the economic and romantic limitations of the local environment. To begin with, one finds that the emotional tone of low-income households is very different from that of more affluent ones. Poverty is accompanied by greater emotional negativity in the home, as revealed by research on content analysis of speech, problem-solving by children, child abuse, antisocial behavior, mental illness, and so on (Barber, 2002a; Hart and Risley, 1995). Exposure to negative emotionality in early life evidently reduces trust and commitment in future relationships, particularly intimate ones like close friendships and marriage (Belsky, Steinberg, and Draper, 1991). If poor children experience more emotional negativity in early life, does this mean that they have greater difficulty in establishing the trust required for stable reproductive relationships? Is poverty within a society a useful predictor of individual differences in emotional commitment problems? One way of assessing this question is to investigate the effects of parental income on single parenthood ratios in cross-sectional research. If women are raised in poverty, are they more likely to reproduce as single mothers, all else being equal? Based on the theoretical perspective of Belsky et al. (1991), and assuming that poverty is psychologically stressful (Lupien, King, Meaney, and McEwen, 2001), one would predict that poverty should evoke emotional negativity during childhood, thereby increasing subsequent emotional commitment problems, so that people raised in poverty would be more likely to be single parents. Poverty can be measured indirectly in terms of low educational attainment, given that education affects a person's earning potential in our society. Low education level is a powerful predictor of single parenthood.
Using education level as a proxy measure, it turns out that poor women are considerably more likely to have out-of-wedlock births. According to U.S. data for 1994, 46% of the children born to female high school dropouts were born outside wedlock, as opposed to just 6% of children born to women with a bachelor's degree. (The respective proportions for high school graduates and women with some college were 30% and 17%; Abrahamson, 1998.) Similar patterns apply to single fathers. These results suggest a remarkable bifurcation in American society whereby affluent, well-educated women maintain single parenthood ratios that are not appreciably different from historical norms, whereas poor women demonstrate a huge increase in single parenthood, consistent with the emotional development thesis. Of course, poor women are also more likely to raise children alone because they encounter fewer men in their social circles who are economically qualified as marriage partners. Although historical research on single parenthood emphasizes the economic characteristics of males, the greater participation of modern women in paid labor means that their own economic opportunities are an increasingly important influence on family structure. Broadly speaking, there are two distinct subtypes of feminine economic independence. Close to the top of the economic hierarchy, women have the option of raising children independently, although this option is less desirable in some countries than others for various economic and political reasons, such as government contributions to child support. "Murphy Browns" are thin on the ground in the U.S., for example, but are evidently much more common in social democratic countries like Sweden. Closer to the bottom of the economic hierarchy, poor women may be independent of paternal support of children by necessity, i.e., because there is a scarcity of economically qualified men.
As these notions imply, wealthy women are more likely to begin their careers as single mothers comparatively late in life, after they have established themselves in careers (which typically takes some ten years of effort; Goldin, 1995; Kaplan, Lancaster, Tucker, and Anderson, 2002), whereas poor women are more likely to begin their reproductive careers as single mothers earlier in life. Interestingly, a young woman's career prospects can have a major influence on when she begins her family. One of the best measures of career potential is academic success in high school, and academic failure greatly increases the probability of single teen childbearing. Data from the National Longitudinal Study of Youth indicate that women aged 15-19 years in the bottom fifth of their high school class in math and reading skills are five times more likely to bear children than those in the top fifth (15% compared to 3% per year; Pittman and Govan, 1986). Although most teen pregnancies are unplanned, career motivation affects deliberate reproductive choices in predictable ways. Thus, when teens with high career aspirations find themselves pregnant, they are more likely to have an abortion. Young women with low career aspirations are more strongly motivated to invest their time and energy in raising a child (Barber, 2000a; Pittman and Govan, 1986). Despite efforts in many social democratic countries of Europe to ease conflicts between work and family, there is often a clash between raising children and developing a career. This conflict may be deduced from the fact that career women postpone reproduction for approximately a decade compared to those without careers (defined as having earnings above the lowest 25%; Goldin, 1995). The conflict between careers and early reproduction is thus fairly straightforward and can be thought of as partly a product of conflicting time demands between career and family.
The connection between educational failure and the early reproduction of single women is rather more complex. To begin with, subpar educational performance predisposes young women, particularly poor ones, to early sexuality for a variety of reasons. Early single parenthood is facilitated not just by bad career prospects but also by diminished opportunities for marriage. The role of poverty in diminished marriage prospects may be illustrated by comparing various U.S. ethnic groups that differ in average earnings. One measure of marriage difficulty is the proportion of women who reach the end of their reproductive lives without marrying. By the age of 40-44 years, 22% of African American women have never married, compared to just 7% of Whites and 10% of Hispanics (Abrahamson, 1998). The poorer groups (African Americans and Hispanics) thus have substantially higher rates of nonmarriage than Whites. These data constitute a very conservative measure of marriage problems among poor women, however. Although the great majority of Black women eventually marry, they are likely to be unmarried when their first child is born and to spend much of their peak reproductive years as single mothers, due to delayed marriage and to marital instability. Data on first births before marriage provide a clearer picture of the marriage market difficulties of poorer U.S. ethnic groups. Between 1990 and 1994, three-quarters of African American first births occurred before marriage, compared to two-fifths for Hispanics and a quarter for Whites (Abrahamson, 1998). Women from poorer ethnic groups are thus considerably less likely to marry before giving birth for the first time. Note that African American women are considerably more likely to be single at the time of their first birth than Hispanics, although there were minimal differences in income, which actually favored African Americans in this period.
These differences probably reflect the scarcity of young males in African American communities due to early deaths, illnesses, accidents, and incarceration, among other factors (Barber, 2002a). In addition to the adaptive pattern of relationships between single parenthood and economic factors (including the marriage market), children raised in poverty generally experience a psychologically harsher early life that militates against the trust, commitment, and empathy that form the basis of successful marriages (see below). Alternatively stated, in an environment where marriage is less viable as a reproductive strategy, children mature with less interest in, or potential for, stable romantic relationships. This suggests that children are raised to fit in with the practical realities of adult life in their particular community. In other words, it suggests adaptive flexibility in the development of human sexual behavior. Further evidence for this interpretation is provided by research on the development of sexual behavior as a function of parental income.

Poverty and Adaptive Flexibility in Sexual Development

One way in which poverty affects single parenthood is clearly through the limitations it places on marriage formation, as illustrated by both historical and contemporary research using various methodologies. Sexologists have long been aware of differences in sexual behavior as a function of socioeconomic status, and it seems reasonable to classify such differences as manifestations of a more general phenomenon of adaptive flexibility in sexual development. Generally speaking, being raised in poverty predisposes men to short-term relationships, or the low-investing "cad" strategy described by evolutionary psychologists, as opposed to the high-investing "dad" strategy (Cashdan, 1993; Draper and Harpending, 1982).
Men would not succeed in their careers as cads, however, if this were not tolerated, or even encouraged, by women having a similarly short-term perspective on sexual relationships. This argument is clearly supported by anthropologist Elizabeth Cashdan's (1993) research on the sexual strategies of college students. She found that women's and men's sexual behavior varies considerably as a function of their expectations about masculine commitment in sexual relationships. Cashdan concluded that the less emotional commitment women expected from men in their dating pool, the more short-term their own perspective was. Women who believed that their dating environment was full of cads dressed provocatively and had many sexual partners. On the other hand, if they encountered many potential dads, i.e., caring and nurturing men, they behaved more sedately, emphasizing their own propensities for sexual fidelity and chastity. Cashdan reported that cads attracted women by drawing attention to their physical appearance and sexuality, whereas dads "advertised" their economic assets, or capacity for economic success, as well as their desire for a permanent relationship. While some readers might see such findings as confirming outmoded stereotypes of sex differences in sexual behavior, it is important to recognize that college students, as a group, are arguably more immune to preconceived notions about sexual behavior than other segments of the population and are thus expressing evolved psychological propensities within this particular environment (see Townsend, 1998, for a similar argument concerning medical students). While young college students adapt their dating behavior to the immediate social environment, it is quite clear that some of the variation in sexual behavior is also affected by the developmental environment (as well as by genetically inherited variation; Simpson and Gangestad, 1992).
Thus, a more stressful early environment predisposes people to short-term, or unstable, sexual relationships, as manifested by the data on children of divorced parents, for example. Other complex childhood stressors, specifically poverty, may have similar effects. Most theories of the influence of stressful home environments on the development of sexual behavior emphasize the pathological aspects, as reflected in social problems like school failure, delinquency, and so forth. Evolutionists are more willing to accept that there is a range of adaptive variation, and that children's responses to stressful rearing experiences may constitute normal function in an adverse environment rather than the breakdown of normal developmental mechanisms. To this end, Belsky et al. (1991) proposed that children who experience insensitive parenting, which is more characteristic of low-income homes, are better prepared to prosper in a harshly competitive adult social environment. Belsky et al.'s evolutionary theory of socialization pivots on the principle that unresponsive parenting elicits exploitative interpersonal attitudes and antisocial behavior in the second generation. It also produces a short-term perspective toward sexuality. Given that poverty is associated with increased psychological stress among children, it would be predicted to have all of the above effects. The most compelling evidence of emotional negativity in parent-child relationships in poor homes comes from analysis of the actual speech content addressed to children in economically disadvantaged homes. Parents provide far less verbal stimulation to children in poor homes, which has important implications for cognitive development in general, and for the development of vocabulary size in particular. Poor parents say much less to their children, and what they do say is much more likely to have a hostile, emotionally negative, or disparaging tone, and to involve scolding rather than praise (Hart and Risley, 1995).
The implied relative lack of emotional warmth between parents and children has rather obvious implications for future sexual relationships. Thus, poor single teenage mothers often complain about a lack of warmth in relations with their mothers, according to Musick (1993). Research on the home backgrounds of single teen mothers finds that they experience many psychological stresses, and sources of negative emotionality, compared to nonmothers (Corona and Tidwell, 1999). Family problems included: the absence of a father figure to provide emotional and economic support; arguments between parents; exposure to drug addiction or alcoholism in the home; parental divorce; physical, sexual, or emotional abuse; and unsatisfactory or unstable relationships with foster homes. Separation from fathers (which occurs more commonly in low-income homes) may engender a sense of emotional deprivation for which early sexual relationships seem to provide an answer. The likelihood of young women being sexually active at an early age, and becoming pregnant in their teenage years, is increased by a perceived lack of emotional closeness to their mothers. In interview studies, many teen mothers describe the relationship with their own mothers as both difficult and distant. Some of the mothers are emotionally rejecting and others emotionally dependent on their daughters (Corona and Tidwell, 1999). Psychologists find that father absence does not have the same consequences where it is due to bereavement, suggesting a complex interaction of factors in the family environment on emotional development (Barber, 2000a; Popenoe, 1996). Unsatisfying emotional relationships with parents may produce complex effects on sexual psychology. Many young single mothers have conflicting attitudes toward men. Perhaps consistent with what they have witnessed around their own homes, they view most men as unreliable, alcoholic, and potentially violent.
Conversely, they may entertain unrealistically favorable expectations of their own partners, hoping that once they become pregnant, their boyfriend will fall in love with them and propose marriage. Anderson (1990) paints a vivid picture of the short-term sexual relationships conducted by young African American mothers inhabiting economically depressed inner cities, where marriage prospects are diminished by unfavorable economic conditions as well as the scarcity of men. Anderson describes dating in this environment as an odd mixture of calculation and vulnerability, wherein young women use their sexuality to manipulate men and often end up pregnant and abandoned. His "streetwise" young men are portrayed as befriending women purely to obtain sexual gratification, which they refer to as "hit and run" or "booty." To accomplish their short-term sexual goals, men cater to female fantasies by offering extravagant, if insincere, promises of affection, love, and even marriage. After a young woman finds herself pregnant, she is likely to be abandoned with contempt. The relationship ends and the cycle begins anew with a different partner. Playing their role as cads to perfection, "streetwise" men refuse to support the children they have fathered. Where women perceive their world to be full of cads, they also employ short-term reproductive tactics, emphasizing their physical attractiveness and using their sexuality as a bargaining chip to obtain the attention and fleeting affections of men (Cashdan, 1993). Short-term reproductive strategies are clearly not peculiar to America's inner cities but can be seen as an adaptive response to difficult economic circumstances in any country. Sex researchers working in the U.S. and Britain found that working-class, or low-income, people were generally more unrestrained in their sexual attitudes and behavior than the rest of the population.
During the 1960s, middle-class youth tended to catch up with their working-class counterparts in terms of premarital sexuality and other measures of sexuality, however. English research conducted in the 1960s and 1970s nonetheless found that income-group differences persisted in the sense that working-class youth were sexually active from an earlier age (Argyle, 1994). Eysenck (1976) reported that working-class Britons were more likely to approve of marital infidelity and to agree that physical gratification is the most important aspect of marriage. He concluded that working-class respondents to surveys are more earthy, whereas middle-class respondents are more moral in their sexual attitudes. Eysenck believed that working-class people had more libido. American research conducted at the end of the 1980s reached similar conclusions, finding college-educated people to be more restrained than others in a wide variety of sexual behaviors. Ironically, poorer people are less satisfied with marital sexuality (even though they report having sex somewhat more often). They are also more likely to have extramarital relationships (at least among men; Argyle, 1994). Recent research suggests that the more short-term sexual orientation of poor people might be attributable to the effects of stress on the developing brain. Among victims of child abuse (psychological as well as physical), for example, early stress alters brain anatomy and function, thereby producing a pattern of high sex drive and low sexual satisfaction (Teicher et al., 2002). Considered as a complex stressor, poverty could have the same type of effect, especially considering that stress is a psychological phenomenon that may be produced in emotionally negative homes where no threshold of criminal abuse is passed (Teicher et al., 2002). Short-term physical relationships are not restricted to the poor, of course.
They are conducted by affluent young people on American college campuses, as depicted, for example, in Townsend's (1998) study of sexual relationships among medical school students. His female informants described dozens of sexual relationships, many undertaken for the most trivial of motives. Some women slept with physically attractive men primarily to demonstrate their own sexual desirability. Jilted women occasionally made love with their former lover's best friend, motivated solely by spite. Shallow, or even malicious, sexual relationships are clearly not restricted to poor men. The association between poverty and the development of relatively unrestricted sexual behavior in both men and women helps to explain why single parenthood is more common in poor neighborhoods. This implies adaptations of sexual psychology to varied landscapes of economic opportunity. In some cases, these phenomena are quite well understood in terms of psychological development, and recent research has begun to pinpoint possible underlying brain mechanisms (Teicher et al., 2002). Short-term reproductive strategies of men are quite easily accommodated within an evolutionary perspective because they confer increased reproductive success on cads, thus ensuring that a willingness for uncommitted sexual relationships would be promoted by natural selection (Symons, 1979). But why are single mothers willing to accept reduced paternal investment in their offspring?

Why Women Accept Reduced Paternal Investment

A comprehensive analysis of historical and evolutionary factors affecting single parenthood is not possible without some understanding of the dynamics of marriage markets and their influence on sexual behavior. This sort of analysis was pioneered by Guttentag and Secord (1983).
Their cross-cultural and historical comparisons demonstrated that a scarcity of men in the population is generally correlated with "liberated" sexual behavior, as women compete for a diminished pool of young men by emphasizing their sexual availability. The scarcity of men in ancient Sparta, due partly to the practice of male infanticide and partly to warfare, was used to explain the sexual liberation of women there, for example, whereas the excess of males in Athens during the same era accounted for the extreme preoccupation with feminine chastity in that city. Similarly, Guttentag and Secord's (1983) historical analysis of sexual behavior in the U.S. concluded that the more difficult marriage market faced by young women in the 1960s, compared to the 1950s, liberated women's sexual behavior, as more women began having intercourse before marriage and dressed provocatively, suggesting sexual availability. Changing sexual behavior can thus be accounted for in terms of changing marriage market dynamics. A scarcity of men means that some women will inevitably fail to marry and are therefore liable to become sexually active outside marriage. The presence of a pool of sexually active single women essentially sets up an "arms race" whereby women who are interested in marriage must offer premarital sexual activity to compete for masculine attention and affection. This is in marked contrast to the coy strategy prevailing in societies where women's marriage prospects are very good and where they advertise chastity as a means of ensuring paternity confidence, which is universally desirable to prospective husbands (Barber, 2002a; Symons, 1979).
In sexually liberated societies, women thus play a very delicate game of implying that they are ready for sexual intercourse with their boyfriend, while simultaneously denying that they are the sort of woman who enjoys many sex partners and would thus be undesirable as a wife because of the low paternity confidence she would provide in relation to children of the marriage (Symons, 1979). Such marriage market dynamics are particularly influential in the lives of poor women. For them, the supply of marriageable men is particularly bleak, as already discussed. There is thus a large pool of sexually active single women in low-income neighborhoods, which favors the cad strategy that seems to be particularly common in that environment. The prevalence of short-term reproductive strategies means that both males and females are likely to be sexually active from an early age. (Indeed, a woman's sexual maturation can be accelerated by a few months by a stressful early environment, such as that characteristic of poverty and father absence; Ellis et al., 2003.) A plentiful supply of sexually active single young women favors an opportunistic strategy by young men, who can achieve sexual gratification without providing any long-term emotional commitment, or paternal investment (in the event of pregnancy). If poverty makes it difficult for men to support their children, their reproductive success is favored by pursuing a cad strategy (i.e., seeking sexual gratification in short-term relationships) and emphasizing mating effort rather than paternal investment. This might be considered the default strategy of male mammals, most of which invest little in offspring and compete aggressively with other males for mating opportunities and reproductive success (Geary and Flinn, 2001; Hewlett, 1992).
(The word "strategy" is used here in the technical sense of an evolutionary mechanism; it has no connotation of intentionality and does not imply that people want to have children, only that they behave in ways that are liable to increase their reproductive success.) If poor men are less able to provide economic support for their children, then devoting themselves to mating effort rather than paternal investment is adaptive, i.e., generally promotes reproductive success. The cad strategy may work for economically disadvantaged men. The real question is why young women should forgo most paternal investment by opting to raise children alone. The reasons are complex, but the following points should be borne in mind:

- Poor women who do poorly at school have less to lose, socially or economically, from early childbearing. In fact, bearing a child gives them a sense of importance and accomplishment that they did not get from their academic efforts (Barber, 2000a; Musick, 1993).

- Young mothers may anticipate a long-term relationship when they become sexually active.

- Fathers of children borne by teenage women are characteristically more than three years older than the mothers (Landry and Forrest, 1995). The father's relative maturity can be flattering to younger women and means that he tends to be more attractive as well as more controlling.

- In addition to gravitating to older men, poor young women evidently prefer to associate with socially dominant men. Thus, gang leaders are much more sexually active than other gang members and have more sexual partners (Palmer and Tilley, 1995). This implies that women living in poor urban neighborhoods select men on the basis of qualities associated with social success there, i.e., on the basis of attributes such as social dominance, ruthlessness, and aggression, that generally have negative connotations for women in more affluent neighborhoods.
This suggests adaptive design, because such women are acquiring for their male offspring qualities associated with reproductive success in the local environment (Barber, 1995).

- The majority of poor single teen mothers have a history of some kind of childhood sexual abuse (Barber, 2000a; Boyer and Pine, 1992; Musick, 1993). This has the effect both of advancing the age of voluntary sexual activity and of working against the development of social skills that would facilitate equity in their sexual relationships.

Looking at the world from a very different perspective, social workers are inclined to see teenage childbearing as both self-defeating and pathological, which it might be in more affluent circumstances. Yet a good case can be made that early single parenthood is essentially an adaptive response to an environment in which there are limited economic opportunities for women and in which they cannot expect much paternal investment in their children. In a low-investment environment, there is increased emphasis on physical attractiveness in the selection of a sexual partner (Buss, 1994). When competing with one another for the attentions of a low-investing partner, women emphasize their own sexuality and also use sexual favors to manipulate men (Cashdan, 1993; Townsend, 1998). Thus, if she wishes to leave an undesirable home environment, a young woman may initiate a cohabiting relationship to obtain free accommodation (Musick, 1993). Male partners are likely to be physically strong and socially dominant (Palmer and Tilley, 1995). From the perspective of a social worker, it is difficult to see displays of aggressive masculinity, or promotion in a criminal organization, as measures of social success, but poor women are attracted to socially dominant men for the same reason that middle-class women are likely to be attracted to mild-mannered professional men with high earning ability: these are different measures of social success in very different social environments.
By being attracted to dominant men, women in poor neighborhoods acquire at least temporary access to resources (Buss, 1999; Cashdan, 1993). They also acquire the genetic basis of social success that contributes a competitive advantage to their children. These adaptive considerations are relevant to any comprehensive account of the reproductive choices underlying single teen parenthood, but they are unlikely to enter the lexicon of social workers. One of the most interesting aspects of sexual behavior in an economically depressed environment - one having an unusually difficult marriage market for women - is that many young women view motherhood in a very positive light and rarely as a mistake. Many look forward to becoming pregnant as a way of obtaining someone to love. Birth of a first baby may also constitute a rite of passage that provides entry to the world of adults and the society of other young mothers (Musick, 1993). Their optimism in the face of formidable difficulties may be one of the most remarkable examples of adaptive modulation of psychological development to a niche of low paternal investment. Willingness to assume the burden of rearing children alone may stem from such optimism. Alternatively, it might reflect the unsuitability of biological fathers for the social role of parent. Although a nurturant father may contribute a great deal to the happiness, health, and social prospects of offspring, this argument cuts both ways: antisocial fathers can have the opposite effect. Indeed, criminologists have recently found evidence that living with a criminal father makes children more likely to commit serious crimes (Jaffee, Moffitt, Caspi, and Taylor, 2003). Women living in economically depressed neighborhoods might sometimes prefer to raise their children alone if the presence of an antisocial father increased the likelihood of their children getting involved in criminal enterprises at considerable risk to their lives and liberty.
Interview research on poor single mothers in the U.S. finds that many consider teenage childbearing both acceptable and normal (Musick, 1993). They deny the claims of social scientists that they are damaging their own futures, or doing a disservice to their communities by raising children who are at higher risk of criminality, drug addiction, poverty, and so forth. Motherhood provides many of these young women with a sense of optimism, purpose, and meaning in their lives and allows them to hope for a better future. One of Judith Musick's (1993) informants articulated these sentiments clearly in a diary entry: "I like it when people notice I'm having a baby. It gives me a good feeling inside and makes me feel important." "Baby will be here any day now and I will be a proud Teen Mom with my head held high" (pp. 110-111). While single parenthood increases almost inevitably with declining marriage prospects for women, this is not the complete picture. The modern environment evidently creates situations in which single parenthood may actually be the desirable, or preferred, option, even though such a scenario was rare or nonexistent throughout the two-million-year-plus history of our species. Before taking up that theme in relation to changes in family structure in Sweden, it is desirable to say something about "values" explanations of sexual behavior that remain influential among scholars.

Values and the Single Mother

Values interpretations of human sexual behavior rest on the notion of free will, i.e., that there are good and bad options that are voluntarily chosen by freely acting agents. The concept of free choice of family structure, although widely accepted, is problematic for scientists: if each individual were really autonomous, social scientists would be irrelevant in the sense that they could not predict human behavior.
Even if belief in free will is largely inconsistent with scientific inquiry, social scientists are forced to come to terms with arguments that human behavior is determined by "values" that are either propagated passively into individuals by their social environment, or chosen voluntarily from an array of alternatives. They do so in at least three distinct ways. The first is to interpret free will as a popular illusion irrelevant to scientific analysis. The second is to write free will off as a source of noise, or unexplained error, in the data. The third is to use it as an independent variable, or predictor. This can be done in various ways, including experiments that either encourage or frustrate a person's sense of autonomy. In one well-known experiment (Lepper and Greene, 1975), children who were paid for drawing with felt-tipped pens lost their enthusiasm for this activity when payments were stopped, providing evidence that some behavior is governed by intrinsic motivation that can be undermined by external rewards. In social research, the third of these alternatives is frequently employed when choices are studied in the form of attitudes measured at an earlier point in time to see whether they are helpful in predicting subsequent behavior. As intimated above, this enterprise has produced mixed results. Evidence suggests that sexual behavior affects sexual attitudes just as much as sexual attitudes affect behavior (Moors, 2000). If attitudes and behavior are not clearly separable, they do not satisfy the criterion of independence between causes and effects that is a fundamental assumption of scientific explanation. If behavioral attitudes, or self-reported choices, are to avoid tautology (i.e., circularity) and provide useful scientific explanations of behavior, they must be truly independent of the behavior they are used to predict.
Even if sexual attitudes could be separated from behavior, there are many reasons why individual behavior might not conform to attitudes or preferences, illustrating a further weakness in values as a scientific construct. A person who is addicted to cigarettes may hate the addiction, for example, but feel powerless to stop smoking. The social environment often frustrates individual choices as well, and this is clearly true in the case of single parenthood, which may be a product of limited marital opportunities for young women. Thus, research on the attitudes to marriage of young African American women found that they strongly endorsed the value of marriage at a time when few Black women could hope to marry before having their first child. Moreover, exactly the same proportion of African American women as the rest of the population believed that it was desirable to marry before raising a family, although they were more than twice as likely to do the opposite, i.e., raise their first child outside marriage (South, 1993). The mismatch between family aspirations and actual reproductive behavior is not peculiar to African Americans, of course. Thus, the majority of Americans believe strongly in the permanence of marriage, even though there has been a sharp rise in divorce rates and in the numbers of cohabiting couples, who substitute an informal, often temporary, union for a more permanent, more binding one (Smock, 1999). Despite these inconsistencies, the married family remains the statistical norm in the sense that nine out of ten Americans still marry and the majority of children spend most of their childhood in married households (including those with stepparents; Wellner, 2002). In many European countries, including France, where more women aged 20-24 now live with their boyfriends than with husbands, matters are very different (Ekert-Jaffee and Solaz, 2001). Approximately 85% of French marriages begin as cohabiting arrangements.
Sweden is an interesting country in the sense that single parenthood is currently the norm there. This might be a misleading conclusion, however, because unmarried Swedish women are quite likely to be living with the father of their children.

Single Parenthood in Sweden

Sweden is sometimes seen as the exemplar of declining marriage and, consequently, of increasing levels of single parenthood. Swedish marriage rates declined 40% between 1966 and 1974 alone and are currently at a historic low, among the lowest in the world (Popenoe, 1988). The decline in marriage rates is attributable to a concurrent rise in cohabitation rates. If couples can live in the same home and enjoy all the benefits of marriage without a permanent commitment, why should they marry? Widespread failure to marry is not the only sign of weakness in Swedish marriages. Despite unusually low marriage rates, which would be expected to screen out many potentially incompatible matches, Swedish marriages are highly unstable compared to those of other countries at a similar level of economic development. At the end of the twentieth century, Sweden's divorce rate, calculated as a proportion of all marriages, stood at 64%, second only to that of Russia, where 65% of marriages ended in divorce (Moffett, 2002). By 1990, about 50% of Swedish men aged 25-29 were cohabiting. As a result, half of births were outside marriage (Chesnais, 1996). Traditional marriages are little more than a historical curiosity, and there has been a rapid increase in the number of single young Swedes living alone. In downtown Stockholm, just 37% of households contain married people. With 85% of young women with children under seven in the workforce, the young homemaker has receded into history (Sweden's Splashy Women, 1996). Although most births are to single mothers in a technical sense, as a practical matter the great majority of young children live with both parents.
It might thus appear that the transition in Swedish families is more a question of appearance than reality. Yet this is not true, because cohabiting unions dissolve much more rapidly than marriages, even in a country like Sweden that has an exceptionally high divorce rate (Popenoe, 1988). In one study of U.S. women born between 1936 and 1960, for example, the dissolution rate for cohabiting couples with one child was triple that of comparable married couples (Smock, 2000), and a similar pattern is seen in Sweden (Popenoe, 1988). Although the data on single parenthood in Sweden may thus exaggerate the lack of commitment of fathers to their children, high ratios of births to single women are nevertheless correlated with a relative lack of parental commitment to a permanent relationship, which reduces the amount of time that fathers spend living in the same home as their children. Why do so many Swedish couples, compared to those in the U.S. and other developed countries, avoid marrying before reproducing? The conventional answer to this question may be summed up in two words: "welfare state." The Swedish state is so generous in its support of mothers and children that women raising children outside marriage are not exposed to the economic risks encountered by single mothers in the U.S., for example. A comprehensive discussion of the historical roots of the Swedish welfare state is outside the scope of this paper, but a few points bear emphasis. The Swedish welfare state grew out of perceived problems of declining population, but many of its current characteristics were designed to solve the conflict between careers and family faced by women in most developed countries (Carlson, 1990), so that more married women could work, thereby boosting the Swedish economy. The Swedish solution to this conflict was to nationalize many of the economic functions of the traditional family, so that it was easier for Swedish women to raise children without economic cooperation from husbands.
Tax reforms of the 1970s also increased the financial incentive for women to work and reduced their economic dependence on husbands. High tax rates for jointly filing married couples were eliminated and married people were taxed separately (Carlson, 1990). Married women's earnings were no longer vulnerable to the high tax rates that had seriously undermined the benefits of a second household income, to the extent of discouraging married women from going to work at all. Such changes in the tax code, as well as daycare entitlements for mothers, were successful at increasing female labor participation. By 1995, 85% of Swedish women worked outside the home, the highest participation seen in any industrialized country and twice the labor force participation of Italian women, for instance. Many (40%) worked part-time, however, thus limiting potential conflicts between career and family (Home Sweet Home, 1995). Despite working part-time, Swedish women do not lose occupational prestige as a consequence. Gender equality is vigorously promoted and women enjoy equal status with men in most occupations (Sweden's Splashy Women, 1996). This is certainly true of politics. After the 1994 election, women held 41% of the seats in the Riksdag, the highest proportion of female political representation in any country and considerably higher than the 14% of women in the U.S. House of Representatives and Senate (as of 2003). Half of the cabinet members (11 of 22) were also women. (Academic life evidently lags other fields with regard to gender equality: Swedish women must publish twice as much as Swedish men to earn a fellowship in medicine, for example; Wenneras and Wold, 1997.) Direct government support of Swedish children is generous. Free school meals and clothing, together with good childcare benefits, mean that no mother is dependent on her husband, or lover, for economic necessities for herself or her children.
A man's decision to leave his children does not send the family on a downward spiral into hardship, or poverty, as it does in most other countries. Aggressive enforcement of child support laws also means that a father's presence in the home is not necessary to ensure his financial contribution to children. As a result of these radical family policies, very few Swedish children live in poverty. In 1990, only 7% of children lived in households with an income under 50% of the national average. In other words, 93% of children lived in comparative affluence (Home Sweet Home, 1995). Conservative scholars have criticized the Swedish welfare state for weakening married families by taking over many of the economic functions previously fulfilled by fathers (Popenoe, 1988). Expansion of the welfare state has indeed been accompanied by a rapid, and historically unprecedented, increase in births outside wedlock, from 11% in 1960 to 53% in 1995 (Home Sweet Home, 1995). Although 19 out of 20 babies begin life under the same roof as their fathers, most will not reach maturity without experiencing parental separation. Although Sweden has a very high ratio of children born to single mothers, this does not have the same implications for children as it would in many other countries. In addition to being materially provided for, most Swedish children also spend the formative early years of life in two-parent families. It is not too surprising that children of single parents in Sweden turn out very much as children of married couples do in other countries, given that domestic arrangements are quite similar despite the lack of a formal marriage contract. In particular, Sweden does not have the social problems associated with single parenthood among poor women in many other developed countries. Birth rates to teenage women are very low, for example, at 1% annually compared to 6% in the U.S. (Population Reference Bureau, 1998).
This is all the more remarkable given that women are sexually active from a comparatively early age (Carlson, 1990; Popenoe, 1988; Weinberg, Lottes, and Shaver, 1995). The main reason for avoiding unplanned pregnancies may be the widespread use of contraceptives, which are easily available and promoted by many years of public education in responsible sexuality. Other factors also matter. One important factor underlying low rates of single teenage childbearing is the fact that Swedish women have unusually good career prospects. They are thus motivated to delay having a family until they are established in careers (see Goldin, 1995). It is interesting that Sweden has low rates of serious crime despite its high rates of single parenthood and the presence of a substantial immigrant population. Other countries with high ratios of single parenthood often have high crime rates because so many children are born into high-risk groups, specifically to poor single mothers. In Sweden, many of the births to nominally single parents are to mature, affluent women, and few are to poor teenagers. Considering each of these factors, it is perhaps unsurprising that Sweden has much lower rates of serious crime than would be predicted by its ratio of births to single women. Compared to violent crime rates in the U.S., for example, Sweden has ten times fewer assaults, two-and-a-half times fewer rapes, and about 25% fewer murders, based on INTERPOL (1990) data. The fact that the Swedish family system does not produce high rates of crime or other social problems suggests that single parenthood may be less important than poverty in determining the social problems associated with high nonmarital birth ratios in other countries. This is a risky assumption for at least two reasons, however. The first is that Swedish children generally do live with their fathers in the early years of life, when the brain is particularly responsive both to stressors and to environmental impoverishment.
The second is that the increased stress in children's lives attributable to father absence, as measured in terms of stress hormones (Flinn, 1999), may be more pronounced in poor homes for various reasons. Thus, poverty is a complex stressor, and any kind of social support, particularly that from co-residing fathers, could mitigate its effects on behavioral development. As well as experiencing less stress due to their social environment and living arrangements, children of affluent single mothers may benefit from having more extensive social support networks. So far as the evolutionarily relevant aspects of the early environment are concerned, being raised by a single mother in Sweden is evidently not very different from being raised by married parents in other countries. Having come to the end of this summary of data on single parenthood from an ESS perspective, it is time to ask what this perspective contributes to the problem that is new or worthwhile.

ESS: Of What Value for Research on Single Parenthood?

The data on single parenthood suggest that ESS provides the kind of large framework into which many kinds of evidence can be assimilated. Thus, the response of single parenthood ratios to similar influences across time and from one society to another is consistent with ESS (assumption 3) but not with most other perspectives in the social sciences. Moreover, there is little convincing evidence in support of top-down values interpretations of societal variation and very good evidence that such differences are mediated directly through environmental influences on individuals (assumption 2). The most important of such influences include the marriage market and the economic prospects of single women, as compared to the overall well-being of children in two-parent families.
Perhaps the most interesting aspect of the application of ESS to single parenthood is the finding that the sexual psychology of young women and men varies predictably both as a function of the immediate social environment and as a function of the developmental social environment (assumption 4). The most important aspect of such variation is arguably the potential for paternal investment in children, as well as the extent to which women are economically independent of such investment, a circumstance that prevails in Sweden due to the provisions of the welfare state and in many economically developed countries due to expanded professional opportunities for women. A diminished capacity for paternal investment is characteristic of poverty in modern societies, helping to account for variation in sexual behavior, and in single parenthood ratios, as a function of income. Moreover, research on brain development points to psychological stress as a possible mediator in the ontogeny of differing patterns of sexual behavior as a function of parental income. Presumably, this example of evolved developmental plasticity would have tracked very different stressors in the evolutionary past, perhaps a scarcity of food rather than the modern stimulus of insufficient monetary resources. Scientific theories perform two essential functions. First, they organize information and allow it to be stored in an orderly fashion, rather like the ordered arrangement of merchandise in a warehouse. Large-scale theories like ESS can be thought of as providing a great deal of space where new information can be deposited. Second, in addition to organizing information, they stimulate research. This is analogous to the owner of a warehouse finding that a bay is empty and sending out to the supplier for the missing item. (In this case, of course, the role of supplier is performed by researchers, and scientific knowledge is steadily accumulated instead of ebbing and flowing as in a real warehouse.)
This paper demonstrates that ESS can accommodate a great deal of information in an orderly fashion. As far as the function of stimulating research is concerned, it should be obvious that the data reviewed here merely scratch the surface of potential research projects in this field. Even so, ESS offers the prospect of revealing new phenomena or helping us to see established facts in a new light. Thus, the persistence of young single parenthood in economically distressed circumstances, often dismissed as a pathological phenomenon, should probably be seen as an adaptive response to a developmental environment characterized by reduced paternal investment. In any case, social workers who fail to make this connection are (as they currently accept) singularly unlikely to succeed in producing behavioral changes. The success of ESS in reconciling many different types of data offers hope that it may do the same for other content areas. One limitation on this conclusion is that most of the data come from economically developed countries where monogamy is the norm. If anthropologists were to apply this approach to subsistence societies, where marriage systems are different, there is no guarantee that they would draw similar conclusions. On the other hand, the fact that this approach works for modern societies means that it passes a more severe test, given that our behavior has diverged more from that of our subsistence ancestors. In summary, a few simple evolutionary concepts help to explain a great deal of the variation in single parenthood across time, countries, ethnic groups, and economic classes. This supports the view that the concept of adaptation can be applied to modern societies, even those that have passed through the demographic shift.
Doing so not only provides a heuristically useful means of drawing together a great deal of information from many disciplines (including evolutionary biology, anthropology, history, health, sociology, psychology, and economics, among others) but also offers the prospect of a social science that transcends disciplinary boundaries and may provide universal explanations for social behavior that can be applied in any time, place, or historical context, thus satisfying the basic scientific criterion of universality of explanation and evading the pitfalls of cultural relativism. A reviewer of this paper complained that the assumptions of ESS are not new, and this is arguably true if they are taken piecemeal. The focus of the new research strategy is not on any individual assumption, however, but on what the assumptions can accomplish if applied simultaneously, something that has not been previously attempted. In particular, ESS aims to unite the time scales of evolutionary psychology and the social sciences. The data on single parenthood demonstrate that this new approach offers a credible method for uniting evolutionary psychology and the social sciences, a problem that has perplexed scholars in these fields for many years (Barkow, Cosmides, and Tooby, 1992).

Received 9 March 2004; revision received 31 March 2005; accepted 29 April 2005.

References

Abrahamson, M. (1998). Out-of-wedlock births: The United States in comparative perspective. Westport, CT: Praeger.
Abrahamson, M. (2000). Case studies of surges in nonmarital births. Marriage and Family Review, 30, 127-151.
Anderson, E. (1990). Streetwise. Chicago: University of Chicago Press.
Argyle, M. (1994). The psychology of social class. London: Routledge.
Barber, N. (1995). The evolutionary psychology of physical attractiveness. Ethology and Sociobiology, 16, 395-424.
Barber, N. (1998a). The role of reproductive strategies in academic attainment. Sex Roles, 38, 313-323.
Barber, N. (1998b). Sex differences in attitudes to kin, security of adult attachment, and sociosexuality as a function of parental divorce. Evolution and Human Behavior, 19, 1-8.
Barber, N. (2000a). Why parents matter: Parental investment and child outcomes. Westport, CT: Bergin and Garvey.
Barber, N. (2000b). On the relationship between country sex ratios and teen pregnancy rates: A replication. Cross-Cultural Research, 34, 26-37.
Barber, N. (2001). On the relationship between marital opportunity and teen pregnancy: The sex ratio question. Journal of Cross-Cultural Psychology, 32, 259-267.
Barber, N. (2002a). The science of romance. Buffalo, NY: Prometheus.
Barber, N. (2002b). Parental investment prospects and teen birth rates of Blacks and Whites in American metropolitan areas. Cross-Cultural Research, 36, 183-199.
Barber, N. (2002c). Marital opportunity, parental investment, and teen birth rates of Blacks and Whites in American states. Journal of Cross-Cultural Research, 35, 263-279.
Barber, N. (2003). Paternal investment prospects and cross-national differences in single parenthood. Cross-Cultural Research, 37, 163-177.
Barber, N. (2004a). Kindness in a cruel world: The evolution of altruism. Buffalo, NY: Prometheus.
Barber, N. (2004b). Reduced female marriage opportunity and history of single parenthood (England, Scotland, U.S.). Journal of Cross-Cultural Psychology, 35, 648-651.
Barber, N. (2005). Educational and ecological correlates of IQ: A cross-national investigation. Intelligence, 33, 273-284.
Barkow, J. H., Cosmides, L. and Tooby, J. (1992). The adapted mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press.
Belsky, J., Steinberg, L. and Draper, P. (1991). Childhood experience, interpersonal development, and reproductive strategy: An evolutionary theory of socialization. Child Development, 62, 647-670.
Boyer, D. and Pine, D. (1992). Sexual abuse as a factor in adolescent pregnancy and child maltreatment. Family Planning Perspectives, 24, 3-11, 19.
Buss, D. M. (1994). The evolution of desire. New York: Basic.
Buss, D. M. (1999). Evolutionary psychology: The new science of the mind. Boston, MA: Allyn and Bacon.
Carlson, A. (1990). The Swedish experiment in family politics. New Brunswick, NJ: Transaction.
Cashdan, E. (1993). Attracting mates: Effects of paternal investment on mate attraction strategies. Ethology and Sociobiology, 14, 1-24.
Chesnais, J. C. (1996). Fertility, family, and social policy in contemporary Western Europe. Population and Development Review, 22, 729-739.
Comings, D. E., Muhleman, D., Johnson, J. P. and MacMurray, J. P. (2002). Parent-daughter transmission of the androgen receptor gene as an explanation of the effect of father absence on age of menarche. Child Development, 73, 1046-1051.
Corliss, R. (2002, January 28). Does divorce hurt kids? Time, 40.
Corona, S. G. and Tidwell, R. (1999). Differences between adolescent mothers and nonmothers: An interview study. Adolescence, 34, 91-97.
Cosmides, L. and Tooby, J. (1987). From evolution to behavior: Evolutionary psychology as the missing link. In J. Dupré (Ed.), The latest on the best: Essays on evolution and optimality (pp. 277-306).
Draper, P. and Harpending, H. (1982). Father absence and reproductive strategy: An evolutionary synthesis. Journal of Anthropological Research, 38, 255-273.
Durrant, R. and Ellis, B. J. (2003). Evolutionary psychology. In M. Gallagher and R. Nelson (Eds.), Comprehensive handbook of psychology, Vol. 3: Biological psychology (pp. 1-33). New York: Wiley.
Ekert-Jaffee, O. and Solaz, A. (2001). Unemployment, marriage, and cohabitation in France. The Journal of Socio-Economics, 30, 75-97.
Ellis, B. J. (2004). Timing of pubertal maturation in girls: An integrated life history approach. Psychological Bulletin, 130, 920-958.
Ellis, B. J., Bates, J. E., Dodge, K. A., Ferguson, D. M., Horwood, J. L., Petit, G. S., et al. (2003). Does father absence place daughters at special risk for early sexual activity and teenage pregnancy? Child Development, 74, 801-821.
Ember, C. R. and Ember, M. (1994). War, socialization, and interpersonal violence. Journal of Conflict Resolution, 38, 620-646.
Eysenck, H. J. (1976). Sex and personality. London: Open Books.
Fessler, D. M. T. (2004). Shame in two cultures: Implications for evolutionary approaches. Journal of Cognition and Culture, 4, 207-262.
Flinn, M. V. (1999). Family environment, stress, and health during childhood. In C. Panter-Brick and C. Worthman (Eds.), Hormones, health and behavior (pp. 105-138). Cambridge: Cambridge University Press.
Fossett, M. A. and Kiecolt, M. J. (1991). A methodological review of the sex ratio: Alternatives for comparative research. Journal of Marriage and the Family, 53, 941-957.
Geary, D. C. (1998). Male, female: The evolution of human sex differences. Washington, DC: American Psychological Association.
Geary, D. C. and Flinn, M. V. (2001). Evolution of human parental behavior and the human family. Parenting: Science and Practice, 1, 5-61.
Goldin, C. (1995). Career and family: College women look to the past. National Bureau of Economic Research, Working Paper #5188.
Grayson, D. K. and Delpech, F. (in press). Ungulates and the Middle-to-Upper Paleolithic transition at Grotte XVI (Dordogne, France). Journal of Archaeological Science.
Guttentag, M. and Secord, P. F. (1983). Too many women: The sex ratio question. Beverly Hills, CA: Sage.
Hart, B. and Risley, T. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: Paul H. Brookes.
Hetherington, E. M. and Kelly, J. (2002). For better or for worse: Divorce reconsidered. New York: W. W. Norton.
Hewlett, B. S. (Ed.). (1992). Father-child relations: Cultural and biosocial contexts. New York: Aldine de Gruyter.
Home sweet home (1995, September 9). The Economist, 25-27.
INTERPOL (1990). International crime statistics. Paris: Author.
Jaffee, S. R., Moffitt, T. E., Caspi, A. and Taylor, A. (2003). Life with, or without, father: Benefits of living with two biological parents depends on the father's antisocial behavior. Child Development, 74, 109-126.
Kaplan, H., Lancaster, J. B., Tucker, W. T. and Anderson, K. G. (2002). Evolutionary approach to below replacement fertility. American Journal of Human Biology, 14, 233-256.
Landry, D. J. and Forrest, J. D. (1995). How old are U.S. fathers? Family Planning Perspectives, 27, 159-161, 165.
Langford, C. M. (1991). Birth control practice in Great Britain: A review of the evidence from cross-sectional surveys. Population Studies, 45, S49-68.
Lepper, M. R. and Greene, D. (1975). Turning play into work. Journal of Personality and Social Psychology, 31, 479-486.
Lopreato, J. and Crippen, T. A. (1999). Crisis in sociology: The need for Darwin. Somerset, NJ: Transaction.
Lupien, S. J., King, S., Meaney, M. J. and McEwen, B. S. (2001). Can poverty get under your skin? Basal cortisol levels and cognitive function in children from low and high socioeconomic status. Development and Psychopathology, 13, 653-676.
Moffett, J. (2002). East: Effects of divorce on happiness. Radio Free Europe/Radio Liberty. Accessed at http://www.rfert.org on 10/22/02.
Moors, G. (2000). Values and living arrangements: A recursive relationship. In L. J. Waite (Ed.), The ties that bind: Perspectives on marriage and cohabitation (pp. 212-226). New York: Aldine de Gruyter.
Musick, J. S. (1993). Young, poor and pregnant: The psychology of teenage motherhood. New Haven, CT: Harvard University Press.
O'Connor, T. G., Caspi, A., DeFries, J. C. and Plomin, R. (2000). Are associations between parental divorce and children's adjustment genetically mediated? An adoption study. Developmental Psychology, 36, 429-437.
Palmer, C. T. and Tilley, C. F. (1995). Sexual access to females as a motivation for joining gangs: An evolutionary approach. The Journal of Sex Research, 32, 213-217.
Pittman, K., and Govan, C. (1986). Model programs: Preventing adolescent pregnancy and building youth self-sufficiency. Washington DC: Children's Defense Fund. Popenoe, D. (1988). Disturbing the nest: Family Change and decline in modern societies. Hawthorne, NY: Aldine de Gruyter. Popenoe, D. (1996). Life without father. New York: Martin Kessler/Free Press. Population Reference Bureau (1998). World population data sheet. Washington, DC: Author. Quinlan, R. J. (2003). Father absence, parental care, and female reproductive development. Evolution and Human Behavior, 24, 376-390. Shorter, E. (1975). Illegitimacy, sexual revolution, and social change in modern Europe. In T. K. Rub and R. I. Rotberg, (Eds.), The family in history: Interdisciplinary perspectives (pp. 48-85. New York: Harper Torchbooks. Simpson, J. A., and Gangestad, S. W. (1992). Sociosexuality and romantic partner choice. Journal of Personality, 60, 31-51. Smith, P. B. (2004). Nations, cultures, and individuals: New perspectives and old dilemmas. Journal of Cross-Cultural Psychology, 35, 6-12. Smock, P. J. (2000). Cohabitation in the United States. Annual Review of Sociology, Annual 2000, 1-20. South, S. J. (1993). Racial and ethnic differences in the desire to marry. Journal of Marriage and the Family, 55, 357-370. South, S. J. (1996). Mate availability and the transition to unwed motherhood: A paradox of population structure. Journal of Marriage and the Family, 58, 265-279. South, S. J., and Lloyd, K. M. (1992). Marriage opportunity and family formation: Further implications of imbalanced sex ratios. Journal of Marriage and the Family, 45, 440-451. Staples, R. (1985). Changes in black family structure: The conflict between family ideology and structural conditions. Journal of Marriage and the Family, 51, 391-404. Sweden's splashy women (1996, September 7). The Economist, 340, 52. Symons, D. (1979). The evolution of human sexuality. New York: Oxford University Press. Teicher, M. H., Andersen, S. 
L., Polcari, A., Anderson, C. M. and Navalta, C. P. (2002). Developmental neurobiology of childhood stress and trauma. Psychiatric Clinics of North America, 25, 397-426. Townsend, J. (1998). What women want - what men want. New York: Oxford University Press. Waite, L. J. and Gallagher, M. (2000). The case for marriage. New York: Doubleday. Wallerstein, J. S. (1998). Children of divorce: A society in search of policy. In M. A. Mason, A. Skolnick, and S. D. Sugarman, (Eds.), All our families: New policies for a new century. New York: Oxford University Press. Wallerstein, J. S., and Blakeslee, S. (1996). Second chances: Men, women, and children a decade after divorce. New York: Mariner. Weinberg, M. S., Lottes, I. L. and Shaver, F. M. (1995). Swedish or American heterosexual college youth: Who is more permissive? Archives of Sexual Behavior, 24, 409-437. Wellner, A. S. (2002). The marriage habit: Baby Boomers may have believed in free love, but now they're hooked on matrimony. Forecast, 22(3), 1-3. Wenneras, C. and Wold, A. (1997). Nepotism and sexism in peer review. Nature, 387, 341-343. Whitehead, B. D., and Popenoe, D. (2002). The state of our unions: The social health of marriage in America. Newark, NJ: The National Marriage Project, Rutgers University. Whitman, D. (1996, December 9). White-style urban woes: Why violence and social problems are taking their toll in South Boston. U.S. News and World Report, 46-49. Wilson, W. J. (1997). When work disappears: The world of the new urban poor. New York: Vintage. 
Nigel Barber: mailto:nbarber at ime.net

From checker at panix.com Wed Jul 20 19:55:12 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 15:55:12 -0400 (EDT) Subject: [Paleopsych] H-N: From Helping to Hand Grenades: Setting the Bar for Altruism Message-ID:

From Helping to Hand Grenades: Setting the Bar for Altruism by L. James Climenhage and Dennis L. Krebs http://human-nature.com/ep/reviews/ep03208215.html 5.7.10 [Thanks to Laird for this.]

Evolutionary Psychology 3: 208-215

Book Review

From Helping to Hand Grenades: Setting the Bar for Altruism

A review of Kindness in a Cruel World: the Evolution of Altruism by Nigel Barber, Prometheus Books, 2004.

L. James Climenhage and Dennis L. Krebs, Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, B.C., Canada.

In Kindness in a Cruel World: the Evolution of Altruism, Nigel Barber suggests that "Kindness exists, but it struggles to stay afloat on an ocean of cruelty that is the default condition for organisms competing for existence on this planet" (p. 9). The main premise of Barber's book is that humans inherit a capacity for altruism that can be enhanced or diminished through nurture. Barber suggests that the core of this capacity evolved through kin selection and is reflected in parental investment. From this center, altruism ripples outward in concentric circles to reciprocity between members of ingroups, systems of cooperation in societies, and relations among nations. However, the larger the circle, the weaker the altruistic dispositions. In supporting this model, Barber adduces a potpourri of evidence drawn from a wide array of disciplines, including evolutionary biology, economics, political science, history, social and developmental psychology, game theory, anthropology, and neuroscience.
By and large, this book is a good read for lay people and students, but we fear evolutionary psychologists will find many of the analyses simplified and compartmentalized, and some of the conclusions overgeneralized and sensationalized. We were also disappointed by Barber's failure to define the central construct of the book, altruism, in a consistent manner, and his tendency to use the word to refer to quite different phenomena.

The contents of the book

Divided into four parts, this book encompasses a large number of topics ranging from those dealt with by mainstream evolutionary psychologists to those with less direct relevance to the evolution of altruism, such as white collar crime and the sexual behaviour of priests and nuns. The four sections of this book are organized as follows.

Altruism in man and beast. In the first section, Barber offers a brief introduction to Darwin's theory of natural selection, then goes on to describe Hamilton's model of kin selection, interpreting the self-sacrificial helping behaviours of social species such as bees and spiders in terms of mechanisms that evolved through this process. Barber suggests that "altruism in the sterile honeybee is no different from altruism of a parent towards offspring" (p. 70). Barber accounts for helping among strangers in terms of reciprocal altruism. He reviews arguments for and against the idea that the alarm calls of Belding's ground squirrels qualify as altruistic. In considering the evolution of reciprocal altruism in human beings, Barber emphasizes the significance of emotions such as guilt, shame, and moral outrage, arguing that reciprocal altruism works best in small groups (e.g., hunter-gatherers) in which individuals can enhance their fitness by working together and trading perishable goods.
Finally, Barber explores an apparently altruistic profession largely overlooked by evolutionary psychologists, suggesting that "heterosexual priests who refrain from sexual intercourse with women could be considered reproductive altruists if their renunciation of heterosexual expression contributed to the welfare and reproductive success of others..." (p. 96). However, acknowledges Barber, there are many selfish reasons for choosing chastity. Barber ends this discussion by offering a lengthy overview of the history of celibacy in the Catholic Church, which includes evidence that many heterosexual priests were in fact not chaste.

Growing up to be good. In the second section, Barber considers the development of altruism in children, focusing on self-awareness and the emotions that stem from it, such as embarrassment, pride, and shame. He argues that although non-human species such as dogs may seem to experience moral emotions, they "are not self-aware so they cannot have an abstract appreciation of their effects on others" (p. 102). The ability to think about oneself, Barber argues, enables a person to go against his or her natural selfish tendencies, which to Barber is the "essence of morality." In examining the roles of nature and nurture in the determination of altruism, Barber suggests that parents (he implies a mother and a father) constitute the moral compass of children. He argues that when this compass points children in the wrong direction, they may grow up to become criminals. In a discussion of altruism among thieves, Barber advances a "genes-load-the-gun, environment-pulls-the-trigger" type of model, attributing the relatively low crime rates of small communities to familiarity and detectability. Invoking the classic prisoner's dilemma game, Barber suggests that criminal acts are equivalent to defections in which individuals advance their own interests at the expense of their communities.
Overall, Barber argues that evolved mechanisms that give rise to altruism are activated through parental investment. If parents invest too little, they will create poorly socialized individuals who grow up to be deviants, and in extreme cases, psychopaths.

The social impact of kindness. In the third section, Barber considers the link between altruism and health. He discusses the relationship between the neurotransmitter oxytocin and pair-bonding, reviewing evidence that people are more likely to help others if they have a neurochemical bond of affection with them, and that a physically close relationship with an adult early in life promotes normal brain development and health. Barber adduces evidence from Harlow's classic contact comfort studies and orphanage studies conducted in the early part of the twentieth century in support of the idea that people first learn to be social through touch. Touch-deprived monkeys (and children) grow up to be hostile towards peers. According to Barber, "early physical contact is also important for developing social trust, which is a vital component of altruism" (p. 176). Social trust mediates the expansion of the concentric circles of altruism, from relations among family members to relations among strangers. Barber reviews research on such charitable acts as donating blood and rescuing Jews during the Holocaust. He discusses the phenomenon of in-group identification, or "groupishness," and reviews classic social psychological studies on conformity. As we can all attest, altruism for our fellows is often absent. Barber ponders how we can explain such incidents of selfishness and cruelty as the failure of bystanders to intervene in emergencies, road rage, child abuse, infanticide by mothers, and sexual abuse of children by parents, strangers, and priests.
In accounting for such incidents, Barber takes the reader on a rather long digression regarding the heinous history of the Catholic Church, then examines the underpinnings of hostile driving practices. He considers several reasons why hostile drivers are different from the "normals" of society, and opines that "Many [hostile drivers] have antisocial personality disorder, a comparatively rare problem, that makes it difficult to conform to social rules and obey laws" (p. 294).

Kindness and politics. In the final section, Barber considers how we can "tap" evolved propensities to altruism, arguing that our evolved psychological adaptations for cooperation have the ability to "unite strangers or stir up international conflicts" (p. 303). He examines warfare among hunter-gatherer societies, boiling the problem down to ingroup and outgroup biases, which Barber claims may be extremely difficult to overcome. Our cultural evolution from hunter-gatherer tribes to sprawling urban metropolises has created new challenges for our species: "With increased economic development, and increased social complexity, greater conformity is required" (p. 310). Urban environments, according to Barber, give rise to serious problems such as disease epidemics, terrorism and pollution. Barber interprets global pollution in terms of a prisoner's dilemma in which selfish individuals defect and humanity pays the price. Barber argues that the United States refused to sign on to the Kyoto accord because the accord left the door open for cheaters by supplying exemptions to underdeveloped countries. He offers an explanation for why other large nations, such as Russia, decided to support the accord, suggesting that evolved mechanisms render humans short-sighted with respect to the environment. Barber closes his book by asking, "How can the existence of evil people be reconciled with adaptations for altruistic behaviour" (p. 357)?
In answering this question, Barber discusses the sources of such egregiously selfish crimes as murder and rape as well as white-collar crimes. In the end, Barber concludes that, "Nature is red in tooth and claw unless it is restrained by adaptations of altruism" (p. 368).

An Evaluation

Clearly, Kindness in a Cruel World touches on many topics. Considered by themselves, Barber's discussions of most issues are engaging but they tend to lack depth. Regarding the human capacity for altruism, Barber offers an array of mini-conclusions, some of which seem inconsistent with others, and he fails to tie them together in a systematic way. These problems become apparent in the book's introduction. Barber opens by acknowledging that it might be difficult to persuade readers that people are naturally altruistic in view of so much evidence that they behave in evil ways, then goes on to argue that "none of these manifestations of evil minimizes the altruistic motive that springs eternal in the human breast" (p. 9). Why not? What does Barber mean by "the" altruistic motive? Or by "springs eternal"? And if this statement is valid, why would kindness struggle to "stay afloat on an ocean of cruelty that is the default condition for organisms competing for existence on this planet" (p. 9)? If Barber means that humans have evolved to behave in both kind and cruel ways, we would agree. If he means that dispositions to behave in evil ways cannot compromise altruistic motives, we would disagree and point out that this conclusion is inconsistent with the conclusions he draws about cheating and defection in prisoner's dilemma types of games. We look for clarification and justification of this conclusion, but none is forthcoming, at least in any organized manner.
Later in the introduction, Barber alludes to violent criminals who behave in depraved ways, and concludes that "some individuals are indeed born without the capacity to develop a conscience (Others fail to develop sensitivity to persons because of the brutalizing conditions of their childhood.)." We are unaware of any evidence that people with antisocial personalities are 'born' without the capacity to develop a conscience. However, there is evidence that childhood trauma can, and does, have a profound effect on children, leaving some children in a seemingly permanent state of 'arrested emotional development' (Perry et al., 1995; Joseph, 1999). Such evidence minimizes the altruistic motive that springs eternal in, at least, the breast of those with antisocial personalities. In discussing determinants of criminality, Barber concludes that, "Criminals are clearly distinguished by genotypes and family environment that reduce their altruistic tendencies and make them more likely to put their selfish interests before the good of the community" (p. 134). Where is the evidence for distinct genotypes? And what of those who "rob from the rich" to "give to the poor"? Is altruism purely defined by "acts" or does intent matter? Is it appropriate to put civil rights leaders such as Martin Luther King Jr., who break laws in order to change them, in the same category as murderers and rapists? Many attempts have been made over the last century to find a genetic link to crime. We do not know of any that have succeeded. Toward the end of the introduction, Barber asserts that, "In poor countries, youngsters are generally much more concerned with the welfare of others than is true of wealthy countries like our own. The reason is simple: much is asked of them" (p. 14).
Although there is evidence that children from small rural communities assume more responsibility for caring for their younger siblings and doing household chores than children from more urban environments, this evidence does not establish that children from wealthy countries have any less concern for, say, the welfare of their parents or schoolmates than children from poor countries do. It is misleading to account for complex behaviors by attributing them to simple causes. If only it were the case that parents could endow their children with a concern for the welfare of others simply by asking much of them! A theme that repeatedly pops up throughout this book is the deleterious effects single parenthood is presumed to have on children's moral development:

Single parenthood is a major risk factor for crime. Thus, historical increases in crime have been strongly correlated with increases in single parenthood (p. 150).

Children raised without their fathers live in a less healthy manner and experience poorer health throughout their lives, on average... [which] can produce a decline in altruism...and an increased risk of becoming a criminal (p. 176).

Children of divorced parents are more likely to "suffer from anxiety and depression, to experience alcoholism and drug addiction, to get in trouble with the law, and to have conflictual relationships with intimate partners and children of their own" (p. 276).

We know that incidents of crime are correlated with age, race, region, sex, socioeconomic status, parenting practices, social support and many other variables that, in turn, are correlated with single parenthood. And we know that correlation does not equate to causation. Still, the underlying message that emerges from Barber's discussion seems to be that single parenthood produces criminals. We find this conclusion uncomfortably overgeneralized. How much of the variance is accounted for by single parenthood when other factors are controlled?
What is it about single parenting that disposes some children to crime? Why do the children of most single parents turn out just fine? Indeed, why do some become exemplars of morality? To Barber's credit, he frequently qualifies the overgeneralized statements he makes in one part of his book when he revisits the issues in other parts. For example, when Barber discusses the assistance that siblings render to one another in hunter-gatherer societies, he writes, "Such help is not always an unmixed blessing because of rivalry between siblings. Thus !Kung children left in charge of younger siblings may abuse them. In rare cases they even attempt to drown them. This means that young helpers have to be supervised carefully" (pp. 31-32). This leaves the reader with two seemingly contradictory conclusions.

What is Altruism?

The subtitle of Barber's book is "The Evolution of Altruism." The conclusions one reaches about the human capacity for altruism will depend on how one defines the construct. Set the bar low, and it will be easy to achieve; set it too high, and it will be impossible. It is often unclear where Barber is setting his bar. In the introduction, he defines altruism as "actions that help another individual at some cost to the altruist." This definition leaves several important questions unanswered. Do behaviors that proffer help to others at "some cost," but with a net gain, qualify as altruistic? What kinds of cost count: material losses, pain, losses in reproductive success, diminished propagation of genes? Are altruistic behaviors defined solely in terms of their consequences, or do intentions matter? Barber implies that if there is a payoff in helping someone, then the helping behavior may not qualify as altruistic (p. 9). He goes on to assert that if a behavior is predicated on evolved moral emotions like empathy and shame, it is "really" altruistic. But why? Behaviors stemming from these emotions could reap net benefits.
Barber then goes on to assert that the "only requirement for altruistic tendencies to evolve is that they should generally increase the biological success of individuals expressing them" (p. 10). So, it would seem, behaviors that help others at a net gain to the "biological success" of the helper qualify as altruistic. On this definition, there really is little challenge in establishing that people are altruistic (i.e., that they behave altruistically). Yet, a few pages later, Barber asserts, "An altruist is one who puts the survival or reproduction of another individual before his own" (p. 19). If Barber is defining "biological success" in terms of reproductive success, how could tendencies to put the biological success of others above one's own increase the biological success of those who express them? Although there may be solutions to this problem, depending on how one defines biological success, or fitness, Barber does not offer any. Indeed, he does not even acknowledge that there is a problem.

Types of Altruism

Related to this issue, Barber includes different kinds of helping behaviors in the same "altruism" category (as do many other evolutionary theorists). In the introduction, he classifies parental care as altruistic. Later he classifies reciprocity and cooperation as altruistic; and still later, heroic self-sacrifice. He insists that "military service is altruistic, in the sense that the combatants sacrifice their personal welfare for the good of others." Although all these behaviors may qualify as altruistic when altruism is used as an overriding, or umbrella, concept, it is important to attend to and acknowledge their differences. Evolutionary theory leads us to suspect that they stem from mechanisms that evolved through different processes and are designed in different ways.
In our view, a great deal of the confusion in Barber's book and, more generally, in the literature on the evolution of altruism, could be clarified by distinguishing among three types of altruism (genetic altruism, biological altruism, and psychological altruism) and recognizing that they are all different from cooperation and reciprocity.

Genetic altruism. Genetically altruistic behaviors serve to propagate the genes of others at the expense of the alleles possessed by those who emit the behaviors. Put another way, genetically altruistic behaviors reduce an individual's inclusive fitness. This type of altruism seems inconsistent with principles of natural selection.

Biological altruism. Biologically altruistic behaviors serve to enhance the individual fitness (survival and reproductive success) of other individuals at an expense to the fitness of those who emit them. Such behaviors may evolve through kin selection. Animals may sacrifice their individual fitness to enhance their inclusive fitness. Although such behaviors are altruistic at an individual and biological level of analysis, they may be selfish at a genetic level. (Indeed, if they are not selfish at the genetic level, they constitute a major challenge to the theory of evolution!) In some circumstances, the best way for an individual to propagate his or her genes is to help others who possess copies of them.

Psychological altruism. In everyday discourse, people use the word altruism differently from the ways in which evolutionary theorists use it. Psychologically altruistic behaviors serve to enhance the profit and pleasure, or fulfill the psychological needs, of other individuals at a cost to the profit and pleasure of those who emit them.
As explained by such theorists as Batson (2000), Nesse (2000), and Sober and Wilson (2000), there is no necessary connection between evolutionary (genetic or biological) and psychological forms of altruism:

As we once heard Richard Dawkins provocatively but accurately point out, an allele that produces bad teeth in horses (and leads to less effective grazing and more grass for others) is an example of evolutionary altruism. Similarly, an allele that leads one to smoke cigarettes, which may cause impotence, birth defects, and early death, is also an example of evolutionary altruism; it reduces one's procreative potential, thereby providing relative reproductive benefits to others. Most people interested in the existence of altruism are not thinking about bad teeth in horses or smoking cigarettes; they are thinking about psychological altruism. (Batson, 2000, p. 207)

Cooperation and reciprocity. Cooperative behaviors, including those that evolutionary theorists since Trivers (1971) have called "reciprocal altruism," entail making short-term survival and reproductive sacrifices in order to enhance one's long-term interests, and thus need not be genetically, biologically, or psychologically altruistic. In some conditions individuals can foster their interests by coordinating their efforts with others and engaging in social exchanges that reap gains in trade. As Barber points out, in order for mechanisms to evolve that dispose individuals to cooperate, the mechanisms must contain antidotes to cheating.

Conclusion

The take-home message of this book is that children inherit both selfish and altruistic propensities that may be stifled or encouraged by the ways in which they are raised: "...altruism is comparable to physical fitness. We cannot expect children to become athletes without any opportunity for physical exercise.
Neither can we expect them to help others if they receive no training in altruism...our evolutionary history has...provided us with altruistic motives that grow stronger from exercise" (p. 15). We concur with this general conclusion, but did not find the case Barber advanced in support of it organized as coherently or argued as persuasively as we believe it should have been.

References

Batson, C. D. (2000). Commentary discussion of Sober and Wilson's "Unto others: A service... and a disservice." Journal of Consciousness Studies, 7, 207-210.

Joseph, R. (1999). Environmental influences on neural plasticity, the limbic system, emotional development and attachment: A review. Child Psychiatry and Human Development, 29, 189-208.

Nesse, R. M. (2000). How selfish genes shape moral passions. Journal of Consciousness Studies, 7, 227-231.

Perry, B. D., Pollard, R. A., Blakley, T. L., Baker, W. L. and Vigilante, D. (1995). Childhood trauma, the neurobiology of adaptation, and "use-dependent" development of the brain: How "states" become "traits." Infant Mental Health Journal, 16, 271-289.

Sober, E., and Wilson, D. S. (2000). Morality and "Unto others": Response to commentary discussion. Journal of Consciousness Studies, 7, 257-268.

Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35-57.

Citation

Climenhage, L. J. and Krebs, D. L. (2005). From Helping to Hand Grenades: Setting the Bar for Altruism. A review of Kindness in a Cruel World: the Evolution of Altruism by Nigel Barber. Evolutionary Psychology, 3:208-215.

Email Dennis L. Krebs: krebs at sfu.ca
From checker at panix.com Wed Jul 20 19:55:29 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 15:55:29 -0400 (EDT) Subject: [Paleopsych] NYT: To Err Is Human; It's Also a Teaching Tool Message-ID: To Err Is Human; It's Also a Teaching Tool The Chronicle of Higher Education, 5.7.22 http://chronicle.com/weekly/v51/i46/46b00901.htm By DAVID D. PERLMUTTER Some years ago, after I had filed the grades for a large lecture course, one student -- a loyal attendee and diligent note taker -- dropped by my office to thank me for a great semester. After some pleasantries, she produced a typed note and said, "Oh, by the way, professor, here is a point I caught where you were wrong." I blinked, thanked her, and promised I would check on the matter. After she left, I consulted some media-history texts and discovered to my horror that, contrary to what I had informed 250 undergraduates, Thomas Edison's first sound-recording device, his phonograph, consisted of tinfoil wrapped around not a wax disk but a metal cylinder. I considered what my response should be. Obviously I should contact the student and tell her that she was indeed correct. But what about the rest of the class? Did admitting my error matter when the fact involved was, after all, trivial? The textbook did have the facts right. And what about the belatedness of any correction I might make? After some internal debate, I decided that a higher principle was at stake: Even though the semester was over, I had one more lesson to teach my students. Admitting that you goofed is the right thing to do, no matter who you are. The incident, and others like it, drove me to reconsider the ways that I was teaching students to respect the truth.
In the past, when I announced that it's vital to get facts correct through checking with multiple reliable sources, I sensed that students were automatically recording my words but not taking them to heart. So I have tried other strategies to explain the importance of verifying facts. Those include citing examples of badly fumbled facts from the worlds of politics and the news media, and dissecting the cognitive pitfalls that lead us to get facts wrong or lure us into thinking that inaccuracy is an acceptable practice. I have found, however, that the single best teaching tool is for a professor to expose his own blunders. My general argument to students is that if you make a mistake about an obvious fact -- that is, one not subject to controversy -- someone, somewhere and somehow, will out you. I provide major and minor examples. Arnold Schwarzenegger, at the 2000 Republican National Convention, recounted how he decided that he would embrace the philosophy of the Grand Old Party after he saw the presidential candidates Richard Nixon and Hubert Humphrey debate in 1968. Problem: Nixon and Humphrey never debated in 1968. Then there's Rudolph Giuliani in the days after September 11, 2001, brandishing a copy of John Lukacs's Five Days in London: May 1940 and claiming that its description of World War II Londoners holding up under the blitz gave him strength in the trying time after the attack on America. Problem: As the author of the book himself pointed out in an essay in The Chronicle Review ("A Final Chapter on Churchill," October 24, 2003), "there is not one word about the blitz in Five Days." Although I devote substantial class time to talking about lies, or deliberate misinformation, I pay particular attention to errors like Schwarzenegger's and Giuliani's -- and mine about Edison -- cases in which the communicator probably did not intend to get anything wrong. I have an ideological agenda behind such instruction.
It is no great insight to say that today's youth are cynical. I do not know whether they are more cynical than any previous generation in history -- men and women who reached maturity at the end of the so-called Great War must be rivals for that position -- but my students do have a general perception that they are being lied to by those in authority, perhaps including their professors. Certainly the popular culture they ingest frequently sends the message that government, business, academe, and almost all institutions are engaged in innumerable conspiracies to cover up the truth. In fact, Americans are lied to often. But democracies perish when faith and trust in the institutions of a civil society -- like government and the press -- collapse. So I want to make a case to my students that many of the misstatements they see are the result not of evil cabals but rather of plain boneheadedness. My undergraduates seem to get excited about participating in a Sherlockian quest for error. Taking an example from blogs, which are responsible for publicizing a number of goofs committed or repeated by the mainstream media, I have students scour certain op-ed essays from newspapers and do some fact checking. But here's the twist: In most of the cases, I wrote the op-eds. I think when the students see my eagerness to hear about my own errors, they learn a valuable lesson: We all need to ponder why we make mistakes. And, of course, trying to prove their professor wrong is an added attraction. Once the students turn in their results, I offer them a typology of the kinds of errors that I find myself making -- and usually catching, I hope, before publication. (For graduate students, recommended reading on this subject is David Hackett Fischer's Historians' Fallacies.) In my case, the sins that lead to error are all too glaring, foremost among them laziness. 
Thanks to the familiar fallacy, I repeat without checking "facts" that I have cited numerous times and assumed must be true because of repetition (my own and that of others), and about which no one has ever challenged me. Edison's disk is a case in point: At some point in the past, when creating my lecture notes and slides, I made the error and then repeated it often enough, with no one correcting me, that it became an assumed verity. You must review claims of fact, even those that you have been repeating for a long time. I also suffer from the fitting fallacy, an error committed because it seems to make complete sense -- that is, to fit into a set of other facts. I've written a number of essays, and recently a book chapter, on the Tiananmen Square protests and crackdown in China. As a visual historian, I focused on pictures of the event, including the famous one of the lone man standing in front of the oncoming tank. I know that, contrary to popular misconception, the man did not confront the tank in the square, but blocks away. But when interviewed by a New York Times reporter on the subject of famous news icons, I referred to "the man standing in front of the tank in Tiananmen Square." I made the same error that I had detected in others because it just seemed to fit. I'm also dogged by the transpositional fallacy, an error that occurs when I inadvertently modify a fact and make it incorrect. I study images of war and the military. I know, as a piece of trivia, that Gen. George S. Patton owned pearl-handled guns. In an essay about today's media coverage of Iraq, I wrote that Patton would have pulled "his pearl-handled revolvers" on reporters. Seeing the sentence in print, I had to reconsider: Patton, I recalled, possessed pearl-handled .45 automatics. Had I made an error? It turns out that the general had owned pearl-handled revolvers as well, but I should have checked before submitting the essay.
Such are but a few of the reasons I make errors of fact, not regularly but often enough to provoke eternal vigilance. What I hope my students get out of analyzing their professor's foibles is that everyone -- they, I, the authors of textbooks, the president, Nobel Prize winners, and so on -- makes mistakes. The crucial questions are why the mistakes are made, and what is to be done about them. Our duty as teachers is not to produce students who will always get their facts right, but to foster young thinkers who appreciate that facts are indeed worth getting right, and who then take the most important step of candid self-analysis when they get them wrong. Which is what I did when my student notified me of my error about Edison's phonograph: I sent the whole class an e-mail message admitting my mistake, congratulating their astute classmate, and wishing them a good summer. Not a few wrote back telling me that was the first time they had ever had a teacher admit that he was wrong about anything. I hope it's not the last. David D. Perlmutter is a senior fellow at Louisiana State University's Reilly Center for Media & Public Affairs and an associate professor of mass communication on the Baton Rouge campus. From checker at panix.com Wed Jul 20 19:55:40 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 15:55:40 -0400 (EDT) Subject: [Paleopsych] Helga Nowotny: High- and Low-Cost Realities for Science and Society Message-ID: Helga Nowotny: High- and Low-Cost Realities for Science and Society Science 308 (2005.5.20): 1117-8 The author is chair of EURAB, the European Research Advisory Board of the European Commission, and Fellow at the Wissenschaftszentrum Wien, A-1080 Vienna, Austria. 
E-mail: helga.nowotny at wzw.at Through the ongoing proliferation of images and symbols, information overload and hi-tech-driven media, science increasingly communicates with the public in ways that are deliberately designed and intended to meet the public (and political) imagination. At the same time, the public is led to imagine what the sciences and scientists mean and say. The appeal to the imagination can be pursued through different avenues. One is that of fiction, a recent example of which is Michael Crichton's blockbuster The State of Fear (1). In his plot, scientists are colluding with the environmental movement, making up facts when necessary, in order to support a common cause. In a shrewd move of having environmental lawyers rehearse possible arguments that the defense might use against them, he lectures extensively, in the guise of scientific graphs and footnotes and by presenting whatever else looks like scientific evidence, about all that is wrong with global warming. It is a mix of science, advocacy, and a vision of scientists whose idealism leads them astray. It has appeared on 37 best-seller lists alongside another book that looks at the impact of environmental change in a very different way: Jared Diamond's Collapse (2), which is based on a scholarly analysis of a series of case studies of ancient civilizations. If Crichton's book is taken not as a work of fiction, but becomes equated with one of fact, like Diamond's, do we not run the risk that trust in science will be decided by market forces and continuing sales figures? The public has become accustomed in a media-saturated world to switching between fact and fiction--but how far does this extend? The question I want to pose is whether in the desire to communicate with "society," "science" has contributed to a confusion between facts and fiction, or as the political analyst Yaron Ezrahi described it, between high-cost and low-cost realities (3).
Ezrahi distinguishes between constructs of the world that require heavy investment of resources, such as time, money, efforts, and skills, and those that engage fewer resources on the part of those who consume these realities. Scientific knowledge constructs high-cost reality, usually based on a densely organized system of concepts, facts, rules, interpretation, methodological skills, equipment, and evidence. As such, the knowledge is not directly accessible to laypersons and remains esoteric. Low-cost realities may be expensive to produce, but are "cheap" to consume. They depend on the immediate experience of the flow of images and sounds. They become the shared means by which the public conceives, imagines, remembers, thinks, and relates or acts in politics. They allow the public to simulate the witnessing of real events without the trouble of being actually there. Low-cost reality is a spectacularly successful commercial product in our culture. Richard Feynman once used the analogy (4) of a Mayan priest who had mastered the numerical concept of subtraction and other elaborate mathematical rules. He used them to predict the rising and setting of Venus. However, to explain his approach to an audience who did not know what subtraction is, the priest resorted to counting beans. The important thing, said Feynman, is that it makes no difference as far as the result is concerned: We can predict the rise of Venus by counting beans (slow, but easy to understand) or by using the tricky rules (which are much faster, but it takes years of training to learn them). However, we have not taken the public through the tedium of bean counting, nor--apart from some notable exceptions--focused on teaching the tricks. Instead, we have been proud to re-enact on the public stage the spectacle of the Maya priest stepping forward before the attentive crowd and announcing the rise of Venus--while Venus rises indeed under the applause and to the relief of the viewers.
We have learned how to stage such events ourselves and have come to believe that we thereby render a public service. We have largely engaged in the construction of low-cost realities that appeal to emotions and the imagination. There have certainly been charges that selling science as sexy has gone too far (5), amusing as it may be to explain the magic in Harry Potter in scientific terms (6). Some have said that by turning the Year of Physics de facto into the Year of Einstein, the point is missed that physics, while central to our understanding of the Universe, is also central to making useful and practical things through engineering (7). Although it is exhilarating to think of science's role in extending the frontiers of our knowledge, it is critical that the public remembers how important science is to their day-to-day reality. There are critical issues that need to be discussed, although they are not especially glamorous, such as the ongoing shift between the public nature of science and the tendency toward its propertization (8) or the upcoming debate about security-oriented research and the potential clash between the public interest in scientific openness and its security interests. Sexy communication is not going to be enough to inform good decision-making. Declining trust in science and scientific experts has been clear in public controversies such as those over genetically modified organisms (GMOs) and the bovine spongiform encephalopathy (BSE) crisis, as well as in the rejection of scientific evidence regarding vaccination safety in the UK. The Eurobarometer, conducted as an EU-wide survey, probes the state of mind of EU citizens and how they view science and technology. The most recent data are expected to be published in mid-May and, for the first time, will be commented on by a panel of experts.
The 2001 survey (9) revealed that two-thirds of the public do not feel well-informed about science and technology, and the number of people who believe in the capacity of science and technology to solve societal problems is declining. Trust in science in general seems to be on the decline in many national surveys, although scientists still come out way ahead of politicians or other public institutions. There are currently clear examples of research on the frontiers of science clashing with human beliefs and values. From the United States, voices can be heard deploring the tendency of politicians to interfere with scientific agendas in teaching and in research (10), and faith-based opposition to the teaching of evolution and to some forms of frontier research, such as stem cell research, continues to raise serious concern. Luckily, creationism/evolution is not an issue in Europe, largely due to the centralized education systems in most countries. However, an analogous situation exists for stem cell research, with some countries, like Germany and Italy, completely opposed. There will be a referendum in Italy shortly on stem cell research. The Catholic Church urges the public not to vote, in the hope that the necessary 50% quota will not be reached, and the referendum will be defeated. Although we may welcome greater public interest in science, if only to avoid another backlash in fields like nanotechnology as occurred with GMOs, we must also confront the thorny issue of how contemporary democracies will deal with minorities who, on faith-based or other, value-related grounds, refuse any compromise. There is no reason to believe that Europe will be immune to an ascendancy of groups who oppose otherwise promising lines of research on the basis of their value system.
If the values dimension is here to stay, it is far from certain that the usual response of setting up ethical guidelines and committees will suffice, let alone that any of the efforts to "better communicate science" will have any effect. If the goal is a more research-friendly society, one in which research and innovation become embedded in society and an expression of "the capacity to aspire" (11), we have to explain what research is and how the process of research is actually carried out. We need to focus more on the processes of research; on the inherent uncertainty that is part and parcel of it; on how bottom-up and top-down approaches intersect; on the actual, and not only idealized, role that users play; and on how research funding agencies work, both on national and supranational levels. We should explain how research priorities are set, because it is not nature whispering into the ears of researchers, but an intricate mixture of opportunities and incentives, of prior investments and of strategic planning mixed with subversive contingencies. We would also be better poised to explain to the wider public the difference between claims or promises made on the part of researchers, depending on whether these claims have been peer-reviewed or not. How should the public know about these rules that play such an important part for the scientific community, see their significance as well as their limitations, unless we explain how they actually work? Or how should they know about the differences in scientific cultures, what counts as evidence, or how consensus is reached with criticism being an essential precondition for moving toward it, if nobody tells them? To observe and explain what scientists are really doing requires that we make the multiple links of interaction between science and society transparent, as well as the institutions that mediate and shape science policies.
The dialogue needs to be extended into the world of politics, economics, and culture, including how scientists are influenced by globalization. There is a need for additional capacity building so that civil society can become a partner in this encounter with science. Apart from patient groups or organizations that have sponsored research into orphan diseases, there has been little organized effort in Europe so far. It is only fair to say that much has been accomplished. The initial notion of public understanding of science as a didactically conceived one-way street through which scientific literacy is diffused did not miraculously lead to increased public support for science. It is increasingly being replaced by concepts of public awareness of science and public engagement with science. Activities that have been undertaken in this more interactive and outreaching mode range from "Physics for taxi drivers" in London (12) to the regular public science festivals occupying their place alongside other, cultural, festivals. The 16th International Science Festival, recently held in Edinburgh (13), and the Swiss "Science et Cité" initiatives (14) stand out as good examples of forums that encourage discussion and debate. Almost all member states of the European Union now celebrate European Science Week (15). The European Science Open Forum (ESOF) was a highly successful European event in Stockholm in 2004 and will be held again in Munich in 2006. The larger (and richer) research institutions, such as the Max Planck Society in Germany or the CNRS in France, have set up their own outreach and public relations units. The current Framework Programme of the EU foresees outreach activities as an integral part of the contract obligations, although it is regrettable that outreach is not considered a factor in evaluating research proposals.
The European Commission's proposed 7th Framework Programme, published on 6 April 2005, foresees an expanded "Science in Society" action line with an increased provisional budget of €554 million (US$712 million) for 7 years. Successful communication can begin to be measured through short-term indicators, such as improvements in public opinion polls on trust in science or increases in enrollment figures for undergraduate physics or chemistry programs. In the longer term, we will need to measure evolution in the direction of scientific citizenship, which presupposes rights and duties on the part of citizens as much as on the part of political and scientific institutions. Innovation is the collective bet on a common fragile future, and neither science nor society knows the secret of how to cope with its inherent uncertainties. It can only be accomplished through an alliance among the participants and a shared sense of direction. References and Notes 1. M. Crichton, The State of Fear (HarperCollins, New York, 2004). 2. J. Diamond, Collapse: How Societies Choose to Fail or Succeed (Viking, New York, 2004). 3. Y. Ezrahi, in States of Knowledge: The Co-Production of Science and Social Order, S. Jasanoff, Ed. (Routledge, London, 2004), pp. 254-273. 4. R. P. Feynman, QED: The Strange Theory of Light and Matter (Princeton Univ. Press, Princeton, NJ, 1986), pp. 10-12. 5. P. Weinberger, Falter, 16 February 2005, p. 14. 6. R. Highfield, Harry Potter: How Magic Really Works (Penguin, London, 2003). 7. "Einstein is dead," Nature 433, 179 (2005). 8. H. Nowotny, in The Public Nature of Science Under Assault: Politics, Markets, Science, and the Law, H. Nowotny et al. (Springer Verlag, New York, 2005), pp. 1-28. 9. http://europa.eu.int/comm/public_opinion/archives/ebs/ebs_154_en.pdf 10. A. I. Leshner, Science 307, 815 (2005). 11. A. Appadurai, in Culture and Public Action, V. Rao and M. Walton, Eds. (Stanford Univ. Press, Stanford, CA, 2004), pp. 59-84. 12.
www.iop.org/news/860 13. www.edinburghfestivals.co.uk/science/ 14. www.science-et-cite.ch/de.aspx 15. www.cordis.lu/scienceweek/home.htm 10.1126/science.1113825 From checker at panix.com Wed Jul 20 20:29:14 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 16:29:14 -0400 (EDT) Subject: [Paleopsych] Telegraph: (Feynman) Banging the drum for principles Message-ID: Banging the drum for principles http://www.arts.telegraph.co.uk/arts/main.jhtml?xml=/arts/2005/06/26/bofey26.xml&sSheet=/arts/2005/06/26/bomain.html [Another review, by John M. Harrison, appended.] (Filed: 26/06/2005) Anthony Daniels reviews Don't You Have Time to Think? by Richard P Feynman. Richard Feynman was, until his death in 1988, the most famous physicist in the world. Only an infinitesimal part of the general population could understand his mathematical physics, but his outgoing and sunny personality, his gift for exposition, his habit of playing the bongo drums, and his testimony to the Presidential Commission on the Challenger Space Shuttle disaster turned him into a celebrity. This book is a selection of his letters. By no means all of them are indispensable, and as the book does not pretend to be a scholarly compilation of his entire correspondence, many might have been culled to make the book shorter. But there are still a great many letters here that are interesting and moving. Feynman's first wife had tuberculosis and died of it in 1945, spending her last months in a sanatorium in Albuquerque while he worked on the Manhattan Project to build the first atom bomb. Feynman wrote to her every day; six weeks before she died, he wrote: "You are a nice girl. Every time I think of you, I feel good. It must be love. It sounds like a definition of love. It is love. I love you." This has great charm. 
Feynman was widowed when he was 27, and 16 months later, he wrote another letter to his dead wife, which we are told bore the signs of repeated handling, ending: "My darling wife, I do adore you. I love my wife. My wife is dead. P.S. Please excuse my not mailing this - but I don't know your new address." He was too high-spirited, however, to let this tragedy destroy him, and the following year he produced the work that won him the Nobel Prize. Although he was fully aware of his own exceptional powers (how could he not be?), Feynman avoided all pomposity, as if he were always aware of its dangers. He turned down honorary doctorates, with the argument that they debased real ones; he resigned from the National Academy of Sciences because he thought it was a self-congratulatory club of people who were pleased with their own self-proclaimed eminence. On the other hand, he was patient with laymen who wrote to him out of the blue. When they accused him of bad behaviour, which in fact was often only a very minor lapse of decorum, he often admitted it with due humility; he dispensed advice to parents anxious about their offspring's careers. He said that children should find something that interested them passionately and pursue it regardless of the obstacles. He thought people should have ambitions to do something rather than to be something. Feynman's integrity was very great. He refused to be included in laudatory compendia of Jewish or American scientists, because he said that science was universal and not the property or product of one nation or group. When he was ignorant of anything he admitted it, and refused to be drawn publicly on questions on which his opinion, that of a layman, was worth no more than anyone else's. Only once in these letters did I find him writing something that was patently untrue. An adolescent boy had written to him asking him to evaluate a certain idea that he had had.
The boy wanted to publish it as a letter to the Scientific American, and his father was worried that the journal would plagiarise him. Feynman replied that scientists were interested only in the truth, and that questions of priority were of no account for them. This is not so, of course. Arguments about priority of discovery not only do happen, but happen quite frequently, and are often mixed up with questions of national pride. Perhaps Feynman was seeking only to maintain the boy's faith in science. More likely, he himself had always been motivated by intellectual curiosity and wonderment, never seeking rewards, monetary or otherwise, except those of the pleasure of discovery; and he could genuinely not conceive of scientists who were differently motivated. This is testimony to the very elevated company in which he moved, to his own character and to his confidence in his own genius. It is a sad fact that only supremely gifted people - and then not all of them - are capable of acting without any admixture of baser motives. For the last 10 years of his life, Feynman had many medical problems, including a rare tumour in his abdomen, which he bore with dignity and fortitude. The double Nobel Prize winner, Linus Pauling, wrote to him advising him to take lots of Vitamin C. On this subject, Pauling became something of a crank, but Feynman answered him with admirable tact. This book is worth having for Feynman's last words alone: "I'd hate to die twice," he said. "It's so boring." Anthony Daniels is a writer and retired doctor BOOK INFORMATION Title Don't You Have Time to Think? Author Richard P Feynman Publisher 486pp, Allen Lane, £20 ISBN n/a ----------- Physics, bongos and the art of the nude http://www.arts.telegraph.co.uk/arts/main.jhtml;jsessionid=ZZHAHQ0P01PYBQFIQMFSM5OAVCBQ0JVC?xml=/arts/2005/06/12/bofey12.xml (Filed: 12/06/2005) M John Harrison reviews Don't You Have Time to Think? by Richard P Feynman.
Richard Feynman was the perfect scientist for his day and ours: a pleasure-loving, ego-driven populist, aware of his own mythologising narrative. He was the plain-speaking man who stormed MIT, Princeton and Caltech, helped design the atom bomb, and won the Nobel Prize for his part in quantum electrodynamics, an investigation of the world so subtle as to be barely explicable outside particle physics journals. At the same time, he insisted that physics be part of life, and that life be fun. Feynman's excursions in bongo drumming, safe-cracking and the art of the nude are a matter of legend; they have made the Feynman brand as unassailable as his scientific credentials. These letters, selected, edited and introduced by his daughter Michelle, cover Feynman's life from the Princeton days to his death from cancer in 1988 at the age of 69. Their recipients range from his mother to the US Department of State, and their contents vary from "a problem in long division" set for his father to the long-running dispute over his attempt to resign from the National Academy of Sciences. After his mother visits him at Princeton in 1940, he writes: "The night you left I had a fellow visit me and we finished the rice pudding and most of the grapes." At Los Alamos in 1945 we discover him reading the encyclopaedia for relaxation. During the Cold War he refuses an invitation to go into intelligence work - a salesman, he thinks, would do better. After winning the Nobel Prize, he turns down an honorary degree because it's like being offered an "honorary electrician's licence", and declines to be included in Tina Levitan's The Laureates: Jewish Winners of the Nobel Prize on the grounds that it is an "adventure in prejudice". Among the sometimes tedious academic correspondence we find generous responses to teenagers who think they might like to get into science; and to science-fiction writers who imagine they have found a way round one immutable physical law or another.
Surprisingly, a biography emerges from the stream of data (Michelle found "12 filing-cabinet drawers and thousands of sheets of paper" in the Caltech Archive); unsurprisingly, it demonstrates the very elements of uncertainty Feynman liked physics to have. Contradictions emerge - for example, although he clearly prided himself on the antics that distinguished him from other physicists, there were limits. "Theoretical physics is a human endeavour... this perpetual desire to prove that people who do it are human by showing that they do other things... (like playing bongo drums) is insulting to me." He was insulted, too, when a student newspaper used a "candid" photograph to accompany its celebration of his Nobel Prize: fun wasn't appropriate "on such an important occasion". The most touching exchanges are between Richard and his first wife Arline. They married early, and she was dying of tuberculosis; while he wrangled atom-bomb physics at Los Alamos, she slipped away in a sanatorium in Albuquerque. Grammar divides them as often as geography: Arline punctuates with hurried, breathless dashes, while Richard resorts to a numbered list. His chatter is relentless. He will do anything to entertain her - he has picked a Yale lock for fun, he has ruined his expensive watch by tinkering with it - but all the time Arline is going down. In an extraordinary rhetorical effort to steady her in the face of death, Feynman writes her a letter about herself as if she is someone else: "I have a problem which I can't handle and I'd like to discuss it with you. Maybe you would know what to do." Suddenly he panics - "I understand at last how sick you are. I am sorry to have failed you, not to have provided the pillar you need to lean upon" - but it's too late. We appreciate the scale of the tragedy when, a year after her death, he writes to her once more: "I love you so that you stand in my way of loving anyone else." This letter, Michelle Feynman records, "is well worn... 
it appears as though he reread it often". So who is revealed in this book? Certainly the wise educator and boyish, irrepressible clown of his daughter's introduction. But there's no doubt that boyish irrepressibility, especially coupled to a literalistic sense of humour, can be tiring, or that Feynman's populism has a hectoring quality. There's no doubt, either, that his colleagues sometimes felt he was too clever at building his brand. There is a risk that we still see only this construct. Time is a great demythologiser, but for now we have Michelle's favourite father and everyone else's favourite physicist. BOOK INFORMATION Title Don't You Have Time to Think? Author Richard P Feynman Publisher 486pp, Allen Lane, £20 ISBN n/a From checker at panix.com Wed Jul 20 20:29:20 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 16:29:20 -0400 (EDT) Subject: [Paleopsych] CHE: A Web Site for Winners, or, More Likely, Losers Message-ID: A Web Site for Winners, or, More Likely, Losers The Chronicle of Higher Education, 5.7.22 http://chronicle.com/weekly/v51/i46/46a02202.htm By ANDREA L. FOSTER Who would choose to click repeatedly on a big red circle on a Web site, only to be told again and again that he or she is a loser? About 1,000 people a day. That result surprises even the three Stanford University graduate students who started a site called the Winbutton. For a while, visitors to the site who clicked on the red circle -- which contains the word "win" -- stood a slim chance of winning money. But the jackpot was usually no more than 10 bucks. More often than not, clickers were fed a variety of quotes telling them how pathetic they were. Some examples: "At least you still have your looks." "Dude. What can I say. You just suck." "Winning isn't everything ... but you are still a loser." "You lose. You must be a Stanford undergrad. Loser." "To be perfectly honest, I don't know why people click on our site," said Alain T. 
Laederach, a postdoctoral student in bioinformatics who helps run the site. "I think they're sort of amused by it, by the funny quotes that you get when you lose." Mr. Laederach's personal favorite: "Dear Loser, welcome to dumpsville. Population: you." He and his buddies, Bernie J. Daigle and Brian T. Naughton, who also do research in bioinformatics, started the site about seven weeks ago, mostly for fun but also hoping to make money if enough people clicked on the advertisements that popped up each time users pressed the "win" button. The Web site generated revenue using Google's AdSense program. Winbutton got money from businesses when visitors clicked on their ads. The subjects of the ads were related to words in the "loser" quotes. Google got wind of the site, however, and last week pulled the advertisements, saying Winbutton violates company policy, which bars Web sites that involve gambling from using AdSense. Winbutton is still operating, but the jackpot is now empty. "It was fun while it lasted," says a note on the site. "Our hat goes off to the guys at Google who figured us out pretty quickly." It seemed that most Winbutton fans weren't interested in the ads anyway, says Mr. Laederach. The site took in $150 in advertising revenue but paid out the same amount, divided among 15 winners. "It doesn't seem like our model was perfect," Mr. Laederach acknowledged. He was not even sure that the site complied with federal law. Christine Hurt, an assistant professor of law at Marquette University, argued on the Conglomerate Blog this month that the students' site may have been an illegal gambling business. The three students last month opened a related Web site, the Losebutton. Visitors there view a different quote about winners each time they click on a "lose" circle. That site's jackpot, too, is $0. 
From checker at panix.com Wed Jul 20 20:29:28 2005 From: checker at panix.com (Premise Checker) Date: Wed, 20 Jul 2005 16:29:28 -0400 (EDT) Subject: [Paleopsych] Guardian: Music of the hemispheres Message-ID: Music of the hemispheres http://books.guardian.co.uk/print/0,3858,5228440-99945,00.html Steven Mithen's The Singing Neanderthals is an interesting but inconclusive examination of the evolution of our musical abilities, writes Peter Forbes Saturday July 2, 2005 The Singing Neanderthals by Steven Mithen 240pp, Weidenfeld, £20 "Useless ... quite different from language ... a technology not an adaptation". This is Steven Pinker's view of the importance of music in human evolution. Needless to say, Steven Mithen takes the opposite view. For him, the proto-language, the communication system of pre-humans, was as much musical as linguistic, just as baby talk (important evidence for Mithen) is more musical than adult speech. At the moment, the evidence for a decision between these two views is inconclusive but Mithen builds his passionate case from recent work on the language of humans and apes and from the fossils of early man (Mithen is a professor of early prehistory at Reading). The crux of the relationship between language and music is the mystery of perfect pitch. This is the ability, possessed by only one in 10,000 of the adult population, to name any note they hear being played or to sing a named note on request. Although the incidence of perfect pitch is higher among musicians than in the general population, it is still rare even among them. The odd thing is that many more babies and small children than adults seem to have perfect pitch. As Mithen says, music has been oddly neglected in psychological studies, though one theory has it that we are all born with perfect pitch but lose it unless it is reinforced by music lessons between the ages of three and six. Why would we lose something so useful? 
Because for most of us who are not going to be musicians it isn't useful at all: it interferes with learning language. In learning language we have to recognise words from the stream of sound even though they come in different accents and pitches. Perfect pitch would be like a digital scanner that could only read letters presented in the correct typeface. Sadly, there are cases, documented by Mithen, of severely autistic children with little or no language skills but supreme musical ability (musical savants). Perfect pitch is associated with their language difficulties. The contortions of perfect pitch show just how complex is the relationship between music and language. It has been known for a long time that many people with language difficulties can sing perfectly happily. In the mildest cases, stammerers can usually sing fluently. Some people who have lost their language through brain lesions retain their musical ability and vice versa. It was once thought crudely that language was a left-hemisphere phenomenon and music right, so that if the left hemisphere were damaged, the music function would be unimpaired. But it is more complicated than that. There is relative localisation; tunes are processed separately from language but the words of a song still have to be retrieved from the language word store. Nevertheless, the words of songs are usually easier to retrieve than those of tuneless poems. Half of the book is concerned with the roots of music in our pre-human past and half with the evidence from neurophysiology and psychological experimentation on humans and primates. Much is still unprovable conjecture but there are some suggestive insights. One such is the connection between music and walking upright. Some seek the essence of music in pitch, melody, or harmony, but the first essential was surely a regular rhythm. 
Chimpanzees can't keep a regular beat but it's hard to imagine a human being who could stride in perfectly regular paces never discovering music that beats four to the bar. So in love with the idea of early man's musicality is Mithen that he ends with a strange call to arms. "So listen to JS Bach's Prelude in C Major and think of australopithecines waking in their treetop nests ... with Miles Davis's Kind of Blue imagine them satiated with food and settling to sleep amid the security of the trees." Bach is conventionally cited to show how far we've come from our animal origins and Kind of Blue is the epitome of urban cool - seduction music rather than music to help a greasy tribe sleep off a gross feast. In the end, Mithen's quest to prove Pinker wrong has led him to an equally reductive attitude towards music. · Peter Forbes's The Gecko's Foot: Bio-inspiration, Engineered from Nature is published by Fourth Estate in August. From waluk at earthlink.net Wed Jul 20 21:49:39 2005 From: waluk at earthlink.net (Gerry) Date: Wed, 20 Jul 2005 14:49:39 -0700 Subject: [Paleopsych] New America Project In-Reply-To: <200305.1121845374806.JavaMail.root@wamui-backed.atl.sa.earthlink.net> References: <200305.1121845374806.JavaMail.root@wamui-backed.atl.sa.earthlink.net> Message-ID: <42DEC6F3.4030800@earthlink.net> shovland at mindspring.com wrote: >>After 6 years, we are less secure, less prosperous, and more divided than we were when they started.>> I'm assuming you are placing blame on the political realm with its Republican leadership rather than the dramatic events that have had their effects on the Western world. Our forgetfulness notwithstanding, the heinous events wrought on the East Coast in September of 2001 have subsequently been played out in Madrid and most recently in London. These events perpetrated by Islamic terrorists are what is threatening our peace of mind, desire to travel, our safety in our own home ... not the Republican administration. 
I doubt Bush II began his leadership as a NeoCon. This label has been tacked onto him by the opposition. Regards, Gerry From shovland at mindspring.com Thu Jul 21 18:20:28 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Thu, 21 Jul 2005 20:20:28 +0200 (GMT+02:00) Subject: [Paleopsych] toxic groups Message-ID: <13556844.1121970029071.JavaMail.root@wamui-bichon.atl.sa.earthlink.net> Stay tuned for further chapters of the "White House Follies" :-) (from Budapest) -----Original Message----- From: Michael Christopher Sent: Jul 20, 2005 8:36 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] toxic groups Steve says: >>Groups eventually recognize that someone in a leadership position is acting contrary to the interests of the group, but not always before a lot of damage is done, for example Enron.<< --True. The offending group often manages to throw a few scapegoats out to delay their own exposure as well. Then they have to start throwing out the "insiders" who get caught, which quickly unravels the group as paranoia and backstabbing set in. Michael Michael C. Lockhart www.soulaquarium.net Blog: http://shallowreflections.blogspot.com/ Yahoo Messenger:anonymous_animus "We are stardust, we are golden, We are billion year old carbon, And we've got to get ourselves back to the garden." Joni Mitchell "Morality is doing what is right no matter what you are told. Religion is doing what you are told no matter what is right." - Unknown "The most dangerous things in the world are immense accumulations of human beings who are manipulated by only a few heads." - Jung 
_______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Thu Jul 21 20:51:45 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:51:45 -0400 (EDT) Subject: [Paleopsych] Mr. Mencken's Coverage of the Scopes Trial Message-ID: Mr. Mencken's Coverage of the Scopes Trial http://www.positiveatheism.org/hist/menck01.htm et seq. [Here are thirteen newspaper columns Mr. Mencken wrote on the trial. These appear to have been taken from Marion Elizabeth Rodgers, _The Impossible H.L. Mencken: A Selection of His Best Newspaper Stories_ (NY: Doubleday, 1999), since the site also contains Gore Vidal's foreword to the anthology. [S.T. Joshi's later anthology, _H.L. Mencken on Religion_ (Amherst, NY: Prometheus Books, 2002), contains, in addition, an earlier article, "The Tennessee Circus," _Baltimore Sun_, 1925.6.15. It also has an article from _The Nation_, "In Tennessee," 1925.7.1, and a further _Baltimore Sun_ article, "Round Two," 1925.8.10, and his substantial reworking of the obituary of Bryan for the American Mercury, 1925.10. It was revised again for _Prejudices: Fifth Series_ (NY: Knopf, 1926) and yet again for _A Mencken Chrestomathy_ (NY: Knopf, 1949), the last two revisions being slight. [The final version of the obituary begins, "Has it been duly marked by historians that William Jennings Bryan's last secular act on the globe of sin was to catch flies?" It is one of Mr. Mencken's masterpieces as is the report below for July 13, "Yearning Mountaineers' Souls Need Reconversion Nightly, Mencken Finds." It was also reworked for _Prejudices: Fifth Series_ and again for the Chrestomathy, now entitled "The Hills of Zion," an even greater masterpiece.] [In the Chrestomathy, Mr. 
Mencken added "My adventures as a newspaper correspondent at the Scopes trial are told in my Newspaper Days, New York, 1943, pp. 214-38."] Homo Neanderthalensis by H.L. Mencken (The Baltimore Evening Sun, June 29, 1925) I Such obscenities as the forthcoming trial of the Tennessee evolutionist, if they serve no other purpose, at least call attention dramatically to the fact that enlightenment, among mankind, is very narrowly dispersed. It is common to assume that human progress affects everyone -- that even the dullest man, in these bright days, knows more than any man of, say, the Eighteenth Century, and is far more civilized. This assumption is quite erroneous. The men of the educated minority, no doubt, know more than their predecessors, and of some of them, perhaps, it may be said that they are more civilized -- though I should not like to be put to giving names -- but the great masses of men, even in this inspired republic, are precisely where the mob was at the dawn of history. They are ignorant, they are dishonest, they are cowardly, they are ignoble. They know little if anything that is worth knowing, and there is not the slightest sign of a natural desire among them to increase their knowledge. Such immortal vermin, true enough, get their share of the fruits of human progress, and so they may be said, in a way, to have their part in it. The most ignorant man, when he is ill, may enjoy whatever boons and usufructs modern medicine may offer -- that is, provided he is too poor to choose his own doctor. He is free, if he wants to, to take a bath. The literature of the world is at his disposal in public libraries. He may look at works of art. He may hear good music. He has at hand a thousand devices for making life less wearisome and more tolerable: the telephone, railroads, bichloride tablets, newspapers, sewers, correspondence schools, delicatessen. 
But he had no more to do with bringing these things into the world than the horned cattle in the fields, and he does no more to increase them today than the birds of the air. On the contrary, he is generally against them, and sometimes with immense violence. Every step in human progress, from the first feeble stirrings in the abyss of time, has been opposed by the great majority of men. Every valuable thing that has been added to the store of man's possessions has been derided by them when it was new, and destroyed by them when they had the power. They have fought every new truth ever heard of, and they have killed every truth-seeker who got into their hands. II The so-called religious organizations which now lead the war against the teaching of evolution are nothing more, at bottom, than conspiracies of the inferior man against his betters. They mirror very accurately his congenital hatred of knowledge, his bitter enmity to the man who knows more than he does, and so gets more out of life. Certainly it cannot have gone unnoticed that their membership is recruited, in the overwhelming main, from the lower orders -- that no man of any education or other human dignity belongs to them. What they propose to do, at bottom and in brief, is to make the superior man infamous -- by mere abuse if it is sufficient, and if it is not, then by law. Such organizations, of course, must have leaders; there must be men in them whose ignorance and imbecility are measurably less abject than the ignorance and imbecility of the average. These super-Chandala often attain to a considerable power, especially in democratic states. Their followers trust them and look up to them; sometimes, when the pack is on the loose, it is necessary to conciliate them. But their puissance cannot conceal their incurable inferiority. They belong to the mob as surely as their dupes, and the thing that animates them is precisely the mob's hatred of superiority. 
Whatever lies above the level of their comprehension is of the devil. A glass of wine delights civilized men; they themselves, drinking it, would get drunk. Ergo, wine must be prohibited. The hypothesis of evolution is credited by all men of education; they themselves can't understand it. Ergo, its teaching must be put down. This simple fact explains such phenomena as the Tennessee buffoonery. Nothing else can. We must think of human progress, not as of something going on in the race in general, but as of something going on in a small minority, perpetually beleaguered in a few walled towns. Now and then the horde of barbarians outside breaks through, and we have an armed effort to halt the process. That is, we have a Reformation, a French Revolution, a war for democracy, a Great Awakening. The minority is decimated and driven to cover. But a few survive -- and a few are enough to carry on. III The inferior man's reasons for hating knowledge are not hard to discern. He hates it because it is complex -- because it puts an unbearable burden upon his meager capacity for taking in ideas. Thus his search is always for short cuts. All superstitions are such short cuts. Their aim is to make the unintelligible simple, and even obvious. So on what seem to be higher levels. No man who has not had a long and arduous education can understand even the most elementary concepts of modern pathology. But even a hind at the plow can grasp the theory of chiropractic in two lessons. Hence the vast popularity of chiropractic among the submerged -- and of osteopathy, Christian Science and other such quackeries with it. They are idiotic, but they are simple -- and every man prefers what he can understand to what puzzles and dismays him. The popularity of Fundamentalism among the inferior orders of men is explicable in exactly the same way. The cosmogonies that educated men toy with are all inordinately complex. 
To comprehend their veriest outlines requires an immense stock of knowledge, and a habit of thought. It would be as vain to try to teach them to peasants or to the city proletariat as it would be to try to teach them to streptococci. But the cosmogony of Genesis is so simple that even a yokel can grasp it. It is set forth in a few phrases. It offers, to an ignorant man, the irresistible reasonableness of the nonsensical. So he accepts it with loud hosannas, and has one more excuse for hating his betters. Politics and the fine arts repeat the story. The issues that the former throw up are often so complex that, in the present state of human knowledge, they must remain impenetrable, even to the most enlightened men. How much easier to follow a mountebank with a shibboleth -- a Coolidge, a Wilson or a Roosevelt! The arts, like the sciences, demand special training, often very difficult. But in jazz there are simple rhythms, comprehensible even to savages. IV What all this amounts to is that the human race is divided into two sharply differentiated and mutually antagonistic classes, almost two genera -- a small minority that plays with ideas and is capable of taking them in, and a vast majority that finds them painful, and is thus arrayed against them, and against all who have traffic with them. The intellectual heritage of the race belongs to the minority, and to the minority only. The majority has no more to do with it than it has to do with ecclesiastic politics on Mars. In so far as that heritage is apprehended, it is viewed with enmity. But in the main it is not apprehended at all. That is why Beethoven survives. Of the 110,000,000 so-called human beings who now live in the United States, flogged and crazed by Coolidge, Rotary, the Ku Klux and the newspapers, it is probable that at least 108,000,000 have never heard of him at all. To these immortals, made in God's image, one of the greatest artists the human race has ever produced is not even a name. 
So far as they are concerned he might as well have died at birth. The gorgeous and incomparable beauties that he created are nothing to them. They get no value out of the fact that he existed. They are completely unaware of what he did in the world, and would not be interested if they were told. The fact saves good Ludwig's bacon. His music survives because it lies outside the plane of the popular apprehension, like the colors beyond violet or the concept of honor. If it could be brought within range, it would at once arouse hostility. Its complexity would challenge; its lack of moral purpose would affright. Soon there would be a movement to put it down, and Baptist clergymen would range the land denouncing it, and in the end some poor musician, taken in the un-American act of playing it, would be put on trial before a jury of Ku Kluxers, and railroaded to the calaboose. _________________________________________________________________ The Scopes Trial Mencken Finds Daytonians Full of Sickening Doubts About Value of Publicity by H.L. Mencken (The Baltimore Evening Sun, July 9, 1925) Dayton, Tenn., July 9. -- On the eve of the great contest Dayton is full of sickening surges and tremors of doubt. Five or six weeks ago, when the infidel Scopes was first laid by the heels, there was no uncertainty in all this smiling valley. The town boomers leaped to the assault as one man. Here was an unexampled, almost a miraculous chance to get Dayton upon the front pages, to make it talked about, to put it upon the map. But how now? Today, with the curtain barely rung up and the worst buffooneries to come, it is obvious to even town boomers that getting upon the map, like patriotism, is not enough. The getting there must be managed discreetly, adroitly, with careful regard to psychological niceties. The boomers of Dayton, alas, had no skill at such things, and the experts they called in were all quacks. The result now turns the communal liver to water. 
Two months ago the town was obscure and happy. Today it is a universal joke. I have been attending the permanent town meeting that goes on in Robinson's drug store, trying to find out what the town optimists have saved from the wreck. All I can find is a sort of mystical confidence that God will somehow come to the rescue to reward His old and faithful partisans as they deserve -- that good will flow eventually out of what now seems to be heavily evil. More specifically, it is believed that settlers will be attracted to the town as to some refuge from the atheism of the great urban Sodoms and Gomorrahs. But will these refugees bring any money with them? Will they buy lots and build houses? Will they light the fires of the cold and silent blast furnace down the railroad tracks? On these points, I regret to report, optimism has to call in theology to aid it. Prayer can accomplish a lot. It can cure diabetes, find lost pocketbooks and restrain husbands from beating their wives. But is prayer made any more efficacious by giving a circus first? Coming to this thought, Dayton begins to sweat. The town, I confess, greatly surprised me. I expected to find a squalid Southern village, with darkies snoozing on the horse-blocks, pigs rooting under the houses and the inhabitants full of hookworm and malaria. What I found was a country town full of charm and even beauty -- a somewhat smallish but nevertheless very attractive Westminster or Belair. The houses are surrounded by pretty gardens, with cool green lawns and stately trees. The two chief streets are paved from curb to curb. The stores carry good stocks and have a metropolitan air, especially the drug, book, magazine, sporting goods and soda-water emporium of the estimable Robinson. A few of the town ancients still affect galluses and string ties, but the younger bucks are very nattily turned out. Scopes himself, even in his shirt sleeves, would fit into any college campus in America save that of Harvard alone. 
Nor is there any evidence in the town of that poisonous spirit which usually shows itself when Christian men gather to defend the great doctrine of their faith. I have heard absolutely no whisper that Scopes is in the pay of the Jesuits, or that the whisky trust is backing him, or that he is egged on by the Jews who manufacture lascivious moving pictures. On the contrary, the Evolutionists and the Anti-Evolutionists seem to be on the best of terms, and it is hard in a group to distinguish one from another. The basic issues of the case, indeed, seem to be very little discussed at Dayton. What interests everyone is its mere strategy. By what device, precisely, will Bryan trim old Clarence Darrow? Will he do it gently and with every delicacy of forensics, or will he wade in on high gear and make a swift butchery of it? For no one here seems to doubt that Bryan will win -- that is, if the bout goes to a finish. What worries the town is the fear that some diabolical higher power will intervene on Darrow's side -- that is, before Bryan heaves him through the ropes. The lack of Christian heat that I have mentioned is probably due in part to the fact that the fundamentalists are in overwhelming majority as far as the eye can reach -- according to most local statisticians, in a majority of at least nine-tenths. There are, in fact, only two downright infidels in all Rhea county, and one of them is charitably assumed to be a bit balmy. The other, a yokel roosting far back in the hills, is probably simply a poet got into the wrong pew. The town account of him is to the effect that he professes to regard death as a beautiful adventure. 
When the local ecclesiastics begin alarming the peasantry with word pictures of the last sad scene, and sulphurous fumes begin to choke even Unitarians, this skeptical rustic comes forward with his argument that it is foolish to be afraid of what one knows so little about -- that, after all, there is no more genuine evidence that anyone will ever go to hell than there is that the Volstead act will ever be enforced. Such blasphemous ideas naturally cause talk in a Baptist community, but both of the infidels are unmolested. Rhea county, in fact, is proud of its tolerance, and apparently with good reason. The klan has never got a foothold here, though it rages everywhere else in Tennessee. When the first kleagles came in they got the cold shoulder, and pretty soon they gave up the county as hopeless. It is run today not by anonymous daredevils in white nightshirts, but by well-heeled Free-masons in decorous white aprons. In Dayton alone there are sixty thirty-second-degree Masons -- an immense quota for so small a town. They believe in keeping the peace, and so even the stray Catholics of the town are treated politely, though everyone naturally regrets they are required to report to the Pope once a week. It is probably this unusual tolerance, and not any extraordinary passion for the integrity of Genesis, that has made Dayton the scene of a celebrated case, and got its name upon the front pages, and caused its forward-looking men to begin to wonder uneasily if all advertising is really good advertising. The trial of Scopes is possible here simply because it can be carried on here without heat -- because no one will lose any sleep even if the devil comes to the aid of Darrow and Malone, and Bryan gets a mauling. The local intelligentsia venerate Bryan as a Christian, but it was not as a Christian that they called him in, but as one adept at attracting the newspaper boys -- in brief, as a showman. 
As I have said, they now begin to mistrust the show, but they still believe that he will make a good one, win or lose. Elsewhere, North or South, the combat would become bitter. Here it retains the lofty qualities of the duello. I gather the notion, indeed, that the gentlemen who are most active in promoting it are precisely the most lacking in hot conviction -- that it is, in its local aspects, rather a joust between neutrals than a battle between passionate believers. Is it a mere coincidence that the town clergy have been very carefully kept out of it? There are several Baptist brothers here of such powerful gifts that when they begin belaboring sinners the very rats of the alleys flee to the hills. They preach dreadfully. But they are not heard from today. By some process to me unknown they have been induced to shut up -- a far harder business, I venture, than knocking out a lion with a sandbag. But the sixty thirty-second degree Masons of Dayton have somehow achieved it. Thus the battle joins and the good red sun shines down. Dayton lies in a fat and luxuriant valley. The bottoms are green with corn, pumpkins and young orchards and the hills are full of reliable moonshiners, all save one of them Christian men. We are not in the South here, but hanging on to the North. Very little cotton is grown in the valley. The people in politics are Republicans and put Coolidge next to Lincoln and John Wesley. The fences are in good repair. The roads are smooth and hard. The scene is set for a high-toned and even somewhat swagger combat. When it is over all the participants save Bryan will shake hands. _________________________________________________________________ Impossibility of Obtaining Fair Jury Insures Scopes' Conviction, Says Mencken by H.L. Mencken (The Baltimore Evening Sun, July 10, 1925) Dayton, Tenn., July 10. 
-- The trial of the infidel Scopes, beginning here this hot, lovely morning, will greatly resemble, I suspect, the trial of a prohibition agent accused of mayhem in Union Hill, N.J. That is to say, it will be conducted with the most austere regard for the highest principles of jurisprudence. Judge and jury will go to extreme lengths to assure the prisoner the last and least of his rights. He will be protected in his person and feelings by the full military and naval power of the State of Tennessee. No one will be permitted to pull his nose, to pray publicly for his condemnation or even to make a face at him. But all the same he will be bumped off inevitably when the time comes, and to the applause of all right-thinking men. The real trial, in truth, will not begin until Scopes is convicted and ordered to the hulks. Then the prisoner will be the Legislature of Tennessee, and the jury will be that great fair, unimpassioned body of enlightened men which has already decided that a horse hair put into a bottle will turn into a snake and that the Kaiser started the late war. What goes on here is simply a sort of preliminary hearing, with music by the village choir. For it will be no more possible in this Christian valley to get a jury unprejudiced against Scopes than would be possible in Wall Street to get a jury unprejudiced against a Bolshevik. I speak of prejudice in its purely philosophical sense. As I wrote yesterday, there is an almost complete absence, in these pious hills, of the ordinary and familiar malignancy of Christian men. If the Rev. Dr. Crabbe ever spoke of bootleggers as humanely and affectionately as the town theologians speak of Scopes, and even Darrow and Malone, his employers would pelt him with their spyglasses and sit on him until the ambulance came from Mount Hope. There is absolutely no bitterness on tap. But neither is there any doubt. 
It has been decided by acclamation, with only a few infidels dissenting, that the hypothesis of evolution is profane, inhumane and against God, and all that remains is to translate that almost unanimous decision into the jargon of the law and so have done. The town boomers have banqueted Darrow as well as Bryan, but there is no mistaking which of the two has the crowd, which means the venire of tried and true men. Bryan has been oozing around the country since his first day here, addressing this organization and that, presenting the indubitable Word of God in his caressing, ingratiating way, and so making unanimity doubly unanimous. From the defense yesterday came hints that this was making hay before the sun had legally begun to shine -- even that it was a sort of contempt of court. But no Daytonian believes anything of the sort. What Bryan says doesn't seem to these congenial Baptists and Methodists to be argument; it seems to be a mere graceful statement of the obvious. Meanwhile, reinforcements continue to come in, some of them from unexpected sources. I had the honor of being present yesterday when Col. Patrick Callahan, of Louisville, marched up at the head of his cohort of 250,000,000 Catholic fundamentalists. The two colonels embraced, exchanged a few military and legal pleasantries and then retired up a steep stairway to the office of the Hicks brothers to discuss strategy. Colonel Callahan's followers were present, of course, only by a legal fiction; the town of Dayton would not hold so large an army. In the actual flesh there were only the colonel himself and his aide-de-camp. Nevertheless, the 250,000,000 were put down as present and recorded as voting. Later on I had the misfortune to fall into a dispute with Colonel Callahan on a point of canon law. 
It was my contention that the position of the Roman Church, on matters of doctrine, is not ordinarily stated by laymen -- that such matters are usually left to high ecclesiastical authorities, headed by the Bishop of Rome. I also contended, perhaps somewhat fatuously, that there seemed to be a considerable difference of opinion regarding organic evolution among these authorities -- that it was possible to find in their writings both ingenious arguments for it and violent protests against it. All these objections Colonel Callahan waived away with a genial gesture. He was here, he said, to do what he could for the authority of the Sacred Scriptures and the aiding and comforting of his old friend, Bryan, and it was all one to him whether atheists yelled or not. Then he began to talk about prohibition, which he favors, and the germ theory of diseases, which he regards as bilge. A somewhat more plausible volunteer has turned up in the person of Pastor T.T. Martin, of Blue Mountain, Miss. He has hired a room and stocked it with pamphlets bearing such titles as "Evolution a Menace," "Hell and the High Schools" and "God or Gorilla," and addresses connoisseurs of scientific fallacy every night on a lot behind the Courthouse. Pastor Martin, a handsome and amiable old gentleman with a great mop of snow-white hair, was a professor of science in a Baptist college for years, and has given profound study to the biological sections of the Old Testament. He told me today that he regarded the food regulations in Leviticus as so sagacious that their framing must have been a sort of feat even for divinity. The flesh of the domestic hog, he said, is a rank poison as ordinarily prepared for the table, though it is probably harmless when smoked and salted, as in bacon. He said that his investigations had shown that seven and a half out of every thirteen cows are quite free of tuberculosis, but that twelve out of every thirteen hogs have it in an advanced and highly communicable form. 
The Jews, protected by their piety against devouring pork, are immune to the disease. In all history, he said, there is authentic record of but one Jew who died of tuberculosis. The presence of Pastor Martin and Colonel Callahan has given renewed confidence to the prosecution. The former offers proof that men of science are, after all, not unanimously atheists, and the latter that there is no division between Christians in the face of the common enemy. But though such encouragements help, they are certainly not necessary. All they really supply is another layer of icing on the cake. Dayton will give Scopes a rigidly fair and impartial trial. All his Constitutional rights will be jealously safeguarded. The question whether he voted for or against Coolidge will not be permitted to intrude itself into the deliberations of the jury, or the gallant effort of Colonel Bryan to get at and establish the truth. He will be treated very politely. Dayton, indeed, is proud of him, as Sauk Center, Minn., is proud of Sinclair Lewis and Whittingham, Vt., of Brigham Young. But it is lucky for Scopes that sticking pins into Genesis is still only a misdemeanor in Tennessee, punishable by a simple fine, with no alternative of the knout, the stone pile or exile to the Dry Tortugas. _________________________________________________________________ http://www.positiveatheism.org/hist/menck02.htm Mencken Likens Trial to a Religious Orgy, with Defendant a Beelzebub by H.L. Mencken (The Baltimore Evening Sun, July 11, 1925) Chattanooga, Tenn., July 11. -- Life down here in the Cumberland mountains realizes almost perfectly the ideal of those righteous and devoted men, Dr. Howard A. Kelly, the Rev. Dr. W.W. Davis, the Hon. Richard H. Edmonds and the Hon. Henry S. Dulaney. That is to say, evangelical Christianity is one hundred per cent triumphant. 
There is, of course, a certain subterranean heresy, but it is so cowed that it is almost inarticulate, and at its worst it would pass for the strictest orthodoxy in such Sodoms of infidelity as Baltimore. It may seem fabulous, but it is a sober fact that a sound Episcopalian or even a Northern Methodist would be regarded as virtually an atheist in Dayton. Here the only genuine conflict is between true believers. Of a given text in Holy Writ one faction may say this thing and another that, but both agree unreservedly that the text itself is impeccable, and neither in the midst of the most violent disputation would venture to accuse the other of doubt. To call a man a doubter in these parts is equal to accusing him of cannibalism. Even the infidel Scopes himself is not charged with any such infamy. What they say of him, at worst, is that he permitted himself to be used as a cat's paw by scoundrels eager to destroy the anti-evolution law for their own dark and hellish ends. There is, it appears, a conspiracy of scientists afoot. Their purpose is to break down religion, propagate immorality, and so reduce mankind to the level of the brutes. They are the sworn and sinister agents of Beelzebub, who yearns to conquer the world, and has his eye especially upon Tennessee. Scopes is thus an agent of Beelzebub once removed, but that is as far as any fair man goes in condemning him. He is young and yet full of folly. When the secular arm has done execution upon him, the pastors will tackle him and he will be saved. The selection of a jury to try him, which went on all yesterday afternoon in the atmosphere of a blast furnace, showed to what extreme lengths the salvation of the local primates has been pushed. It was obvious after a few rounds that the jury would be unanimously hot for Genesis. The most that Mr. Darrow could hope for was to sneak in a few men bold enough to declare publicly that they would have to hear the evidence against Scopes before condemning him. 
The slightest sign of anything further brought forth a peremptory challenge from the State. Once a man was challenged without examination for simply admitting that he did not belong formally to any church. Another time a panel man who confessed that he was prejudiced against evolution got a hearty round of applause from the crowd. The whole process quickly took on an air of strange unreality, at least to a stranger from heathen parts. The desire of the judge to be fair to the defense, and even polite and helpful, was obvious enough -- in fact, he more than once stretched the local rules of procedure in order to give Darrow a hand. But it was equally obvious that the whole thing was resolving itself into the trial of a man by his sworn enemies. A local pastor led off with a prayer calling on God to put down heresy; the judge himself charged the grand jury to protect the schools against subversive ideas. And when the candidates for the petit jury came up Darrow had to pass fundamentalist after fundamentalist into the box -- some of them glaring at him as if they expected him to go off with a sulphurous bang every time he mopped his bald head. In brief this is a strictly Christian community, and such is its notion of fairness, justice and due process of law. Try to picture a town made up wholly of Dr. Crabbes and Dr. Kellys, and you will have a reasonably accurate image of it. Its people are simply unable to imagine a man who rejects the literal authority of the Bible. The most they can conjure up, straining until they are red in the face, is a man who is in error about the meaning of this or that text. Thus one accused of heresy among them is like one accused of boiling his grandmother to make soap in Maryland. He must resign himself to being tried by a jury wholly innocent of any suspicion of the crime he is charged with and unanimously convinced that it is infamous. Such a jury, in the legal sense, may be fair. 
That is, it may be willing to hear the evidence against him before bumping him off. But it would certainly be spitting into the eye of reason to call it impartial. The trial, indeed, takes on, for all its legal forms, something of the air of a religious orgy. The applause of the crowd I have already mentioned. Judge Raulston rapped it down and threatened to clear the room if it was repeated, but he was quite unable to still its echoes under his very windows. The courthouse is surrounded by a large lawn, and it is peppered day and night with evangelists. One and all they are fundamentalists and their yells and bawlings fill the air with orthodoxy. I have listened to twenty of them and had private discourse with a dozen, and I have yet to find one who doubted so much as the typographical errors in Holy Writ. They dispute raucously and far into the night, but they begin and end on the common ground of complete faith. One of these holy men wears a sign on his back announcing that he is the Bible champion of the world. He told me today that he had studied the Bible four hours a day for thirty-three years, and that he had devised a plan of salvation that would save the worst sinner ever heard of, even a scientist, a theater actor or a pirate on the high seas, in forty days. This gentleman denounced the hard-shell Baptists as swindlers. He admitted freely that their sorcerers were powerful preachers and could save any ordinary man from sin, but he said that they were impotent against iniquity. The distinction is unknown to city theologians, but is as real down here as that between sanctification and salvation. The local experts, in fact, debate it daily. The Bible champion, just as I left him, was challenged by one such professor, and the two were still hard at it an hour later. 
Most of the participants in such recondite combats, of course, are yokels from the hills, where no sound is heard after sundown save the roar of the catamount and the wailing of departed spirits, and a man thus has time to ponder the divine mysteries. But it is an amazing thing that the more polished classes also participate actively. The professor who challenged the Bible champion was indistinguishable, to the eye, from a bond salesman or city bootlegger. He had on a natty palm beach suit and a fashionable soft collar and he used excellent English. Obviously, he was one who had been through the local high school and perhaps a country college. Yet he was so far uncontaminated by infidelity that he stood in the hot sun for a whole hour debating a point that even bishops might be excused for dodging, winter as well as summer. The Bible champion is matched and rivaled by whole herds of other metaphysicians, and all of them attract good houses and have to defend themselves against constant attack. The Seventh Day Adventists, the Campbellites, the Holy Rollers and a dozen other occult sects have field agents on the ground. They follow the traveling judges through all this country. Everywhere they go, I am told, they find the natives ready to hear them and dispute with them. They find highly accomplished theologians in every village, but even in the county towns they never encounter a genuine skeptic. If a man has doubts in this immensely pious country, he keeps them to himself. Dr. Kelly should come down here and see his dreams made real. He will find a people who not only accept the Bible as an infallible handbook of history, geology, biology and celestial physics, but who also practice its moral precepts -- at all events, up to the limit of human capacity. It would be hard to imagine a more moral town than Dayton. If it has any bootleggers, no visitor has heard of them. 
Ten minutes after I arrived a leading citizen offered me a drink made up half of white mule and half of coca cola, but he seems to have been simply indulging himself in a naughty gesture. No fancy woman has been seen in the town since the end of the McKinley administration. There is no gambling. There is no place to dance. The relatively wicked, when they would indulge themselves, go to Robinson's drug store and debate theology. In a word, the new Jerusalem, the ideal of all soul savers and sin exterminators. Nine churches are scarcely enough for the 1,800 inhabitants: many of them go into the hills to shout and roll. A clergyman has the rank and authority of a major-general of artillery. A Sunday-school superintendent is believed to have the gift of prophecy. But what of life here? Is it more agreeable than in Babylon? I regret that I must report that it is not. The incessant clashing of theologians grows monotonous in a day and intolerable the day following. One longs for a merry laugh, a burst of happy music, the gurgle of a decent jug. Try a meal in the hotel; it is tasteless and swims in grease. Go to the drug store and call for refreshment: the boy will hand you almost automatically a beaker of coca cola. Look at the magazine counter: a pile of Saturday Evening Posts two feet high. Examine the books: melodrama and cheap amour. Talk to a town magnifico; he knows nothing that is not in Genesis. I propose that Dr. Kelly be sent here for sixty days, preferably in the heat of summer. He will return to Baltimore yelling for a carboy of pilsner and eager to master the saxophone. His soul perhaps will be lost, but he will be a merry and a happy man.

_________________________________________________________________

Yearning Mountaineers' Souls Need Reconversion Nightly, Mencken Finds
by H.L. Mencken
(The Baltimore Evening Sun, July 13, 1925)

Dayton, Tenn., July 13.
-- There is a Unitarian clergyman here from New York, trying desperately to horn into the trial and execution of the infidel Scopes. He will fail. If Darrow ventured to put him on the stand the whole audience, led by the jury, would leap out of the courthouse windows, and take to the hills. Darrow himself, indeed, is as much as they can bear. The whisper that he is an atheist has been stilled by the bucolic make-up and by the public report that he has the gift of prophecy and can reconcile Genesis and evolution. Even so, there is ample space about him when he navigates the streets. The other day a newspaper woman was warned by her landlady to keep out of the courtroom when he was on his legs. All the local sorcerers predict that a bolt from heaven will fetch him in the end. The night he arrived there was a violent storm, the town water turned brown, and horned cattle in the lowlands were afloat for hours. A woman back in the mountains gave birth to a child with hair four inches long, curiously bobbed in scallops. The Book of Revelation has all the authority, in these theological uplands, of military orders in time of war. The people turn to it for light upon all their problems, spiritual and secular. If a text were found in it denouncing the Anti-Evolution law, then the Anti-Evolution law would become infamous overnight. But so far the exegetes who roar and snuffle in the town have found no such text. Instead they have found only blazing ratifications and reinforcements of Genesis. Darwin is the devil with seven tails and nine horns. Scopes, though he is disguised by flannel pantaloons and a Beta Theta Pi haircut, is the harlot of Babylon. Darrow is Beelzebub in person and Malone is the Crown Prince Friedrich Wilhelm. I have hitherto hinted that an Episcopalian down here in the coca-cola belt is regarded as an atheist. It sounds like one of the lies that journalists tell, but it is really an understatement of the facts.
Even a Methodist, by Rhea county standards, is one a bit debauched by pride of intellect. It is the four Methodists on the jury who are expected to hold out for giving Scopes Christian burial after he is hanged. They all made it plain, when they were examined, that they were free-thinking and independent men, and not to be run amuck by the superstitions of the lowly. One actually confessed that he seldom read the Bible, though he hastened to add that he was familiar with its principles. The fellow had on a boiled shirt and a polka dot necktie. He sits somewhat apart. When Darrow withers to a cinder under the celestial blowpipe, this dubious Wesleyan, too, will lose a few hairs. Even the Baptists no longer brew a medicine that is strong enough for the mountaineers. The sacrament of baptism by total immersion is over too quickly for them, and what follows offers nothing that they can get their teeth into. What they crave is a continuous experience of the divine power, an endless series of evidence that the true believer is a marked man, ever under the eye of God. It is not enough to go to a revival once a year or twice a year; there must be a revival every night. And it is not enough to accept the truth as a mere statement of indisputable and awful fact: it must be embraced ecstatically and orgiastically, to the accompaniment of loud shouts, dreadful heavings and gurglings, and dancing with arms and legs. This craving is satisfied brilliantly by the gaudy practices of the Holy Rollers, and so the mountaineers are gradually gravitating toward the Holy Roller communion, or, as they prefer to call it, the Church of God. Gradually, perhaps, is not the word. They are actually going in by whole villages and townships. At the last count of noses there were 20,000 Holy Rollers in these hills. The next census, I have no doubt, will show many more. 
The cities of the lowlands, of course, still resist, and so do most of the county towns, including even Dayton, but once one steps off the State roads the howl of holiness is heard in the woods, and the yokels carry on an almost continuous orgy. A foreigner in store clothes going out from Dayton must approach the sacred grove somewhat discreetly. It is not that the Holy Rollers, discovering him, would harm him; it is simply that they would shut down their boiling of the devil and flee into the forests. We left Dayton an hour after nightfall and parked our car in a wood a mile or so beyond the little hill village of Morgantown. Far off in a glade a flickering light was visible and out of the silence came a faint rumble of exhortation. We could scarcely distinguish the figure of the preacher; it was like looking down the tube of a dark field microscope. We got out of the car and sneaked along the edge of a mountain cornfield. Presently we were near enough to see what was going on. From the great limb of a mighty oak hung a couple of crude torches of the sort that car inspectors thrust under Pullman cars when a train pulls in at night. In their light was a preacher, and for a while we could see no one else. He was an immensely tall and thin mountaineer in blue jeans, his collarless shirt open at the neck and his hair a tousled mop. As he preached he paced up and down under the smoking flambeaux and at each turn he thrust his arms into the air and yelled, "Glory to God!" We crept nearer in the shadow of the cornfield and began to hear more of his discourse. He was preaching on the day of judgment. The high kings of the earth, he roared, would all fall down and die; only the sanctified would stand up to receive the Lord God of Hosts. One of these kings he mentioned by name -- the king of what he called Greece-y. The King of Greece-y, he said, was doomed to hell. We went forward a few more yards and began to see the audience.
It was seated on benches ranged round the preacher in a circle. Behind him sat a row of elders, men and women. In front were the younger folk. We kept on cautiously, and individuals rose out of the ghostly gloom. A young mother sat suckling her baby, rocking as the preacher paced up and down. Two scared little girls hugged each other, their pigtails down their backs. An immensely huge mountain woman, in a gingham dress cut in one piece, rolled on her heels at every "Glory to God." To one side, but half visible, was what appeared to be a bed. We found out afterward that two babies were asleep upon it. The preacher stopped at last and there arose out of the darkness a woman with her hair pulled back into a little tight knot. She began so quietly that we couldn't hear what she said, but soon her voice rose resonantly and we could follow her. She was denouncing the reading of books. Some wandering book agent, it appeared, had come to her cabin and tried to sell her a specimen of his wares. She refused to touch it. Why, indeed, read a book? If what was in it was true then everything in it was already in the Bible. If it was false then reading it would imperil the soul. Her syllogism complete, she sat down. There followed a hymn, led by a somewhat fat brother wearing silver-rimmed country spectacles. It droned on for half a dozen stanzas, and then the first speaker resumed the floor. He argued that the gift of tongues was real and that education was a snare. Once his children could read the Bible, he said, they had enough. Beyond lay only infidelity and damnation. Sin stalked the cities. Dayton itself was a Sodom. Even Morgantown had begun to forget God. He sat down, and the female aurochs in gingham got up. She began quietly, but was soon leaping and roaring, and it was hard to follow her. Under cover of the turmoil we sneaked a bit closer. A couple of other discourses followed, and there were two or three hymns. Suddenly a change of mood began to make itself felt. 
The last hymn ran longer than the others and dropped gradually into a monotonous, unintelligible chant. The leader beat time with his book. The faithful broke out with exultations. When the singing ended there was a brief palaver that we could not hear and two of the men moved a bench into the circle of light directly under the flambeaux. Then a half-grown girl emerged from the darkness and threw herself upon it. We noticed with astonishment that she had bobbed hair. "This sister," said the leader, "has asked for prayers." We moved a bit closer. We could now see faces plainly and hear every word. What followed quickly reached such heights of barbaric grotesquerie that it was hard to believe it real. At a signal all the faithful crowded up to the bench and began to pray -- not in unison but each for himself. At another they all fell on their knees, their arms over the penitent. The leader kneeled, facing us, his head alternately thrown back dramatically or buried in his hands. Words spouted from his lips like bullets from a machine gun -- appeals to God to pull the penitent back out of hell, defiances of the powers and principalities of the air, a vast impassioned jargon of apocalyptic texts. Suddenly he rose to his feet, threw back his head and began to speak in tongues -- blub-blub-blub, gurgle-gurgle-gurgle. His voice rose to a higher register. The climax was a shrill, inarticulate squawk, like that of a man throttled. He fell headlong across the pyramid of supplicants. A comic scene? Somehow, no. The poor half wits were too horribly in earnest. It was like peeping through a knothole at the writhings of a people in pain. From the squirming and jabbering mass a young woman gradually detached herself -- a woman not uncomely, with a pathetic home-made cap on her head. Her head jerked back, the veins of her neck swelled, and her fists went to her throat as if she were fighting for breath. She bent backward until she was like half of a hoop.
Then she suddenly snapped forward. We caught a flash of the whites of her eyes. Presently her whole body began to be convulsed -- great convulsions that began at the shoulders and ended at the hips. She would leap to her feet, thrust her arms in air and then hurl herself upon the heap. Her praying flattened out into a mere delirious caterwauling, like that of a tomcat on a petting party. I describe the thing as a strict behaviorist. The lady's subjective sensations I leave to infidel pathologists. Whatever they were they were obviously contagious, for soon another damsel joined her, and then another and then a fourth. The last one had an extraordinarily bad attack. She began with mild enough jerks of the head, but in a moment she was bounding all over the place, exactly like a chicken with its head cut off. Every time her head came up a stream of yells and barkings would issue out of it. Once she collided with a dark, undersized brother, hitherto silent and stolid. Contact with her set him off as if he had been kicked by a mule. He leaped into the air, threw back his head and began to gargle as if with a mouthful of BB shot. Then he loosened one tremendous stentorian sentence in the tongues and collapsed. By this time the performers were quite oblivious to the profane universe. We left our hiding and came up to the little circle of light. We slipped into the vacant seats on one of the rickety benches. The heap of mourners was directly before us. They bounced into us as they cavorted. The smell that they radiated, sweating there in that obscene heap, half suffocated us. Not all of them, of course, did the thing in the grand manner. Some merely moaned and rolled their eyes. The female ox in gingham flung her great bulk on the ground and jabbered an unintelligible prayer. One of the men, in the intervals between fits, put on spectacles and read his Bible. Beside me on the bench sat the young mother and her baby.
She suckled it through the whole orgy, obviously fascinated by what was going on, but never venturing to take any hand in it. On the bed just outside the light two other babies slept peacefully. In the shadows, suddenly appearing and as suddenly going away, were vague figures, whether of believers or of scoffers I do not know. They seemed to come and go in couples. Now and then a couple at the ringside would step back and then vanish into the black night. After a while some came back. There was whispering outside the circle of vision. A couple of Fords lurched up in the wood road, cutting holes in the darkness with their lights. Once some one out of sight loosed a bray of laughter. All this went on for an hour or so. The original penitent, by this time, was buried three deep beneath the heap. One caught a glimpse, now and then, of her yellow bobbed hair, but then she would vanish again. How she breathed down there I don't know; it was hard enough ten feet away, with a strong five-cent cigar to help. When the praying brothers would rise up for a bout with the tongues their faces were streaming with perspiration. The fat harridan in gingham sweated like a longshoreman. Her hair got loose and fell down over her face. She fanned herself with her skirt. A powerful old gal she was, equal in her day to obstetrics and a week's washing on the same morning, but this was worse than a week's washing. Finally, she fell into a heap, breathing in great, convulsive gasps. We tired of it after a while and groped our way back to our automobile. When we got to Dayton, after 11 o'clock -- an immensely late hour in these parts -- the whole town was still gathered on the courthouse lawn, hanging upon the disputes of theologians. The Bible champion of the world had a crowd. The Seventh Day Adventist missionaries had a crowd. A volunteer from faraway Portland, Ore., made up exactly like Andy Gump, had another and larger crowd. Dayton was enjoying itself.
All the usual rules were suspended and the curfew bell was locked up. The prophet Bryan, exhausted by his day's work for Revelation, was snoring in his bed up the road, but enough volunteers were still on watch to keep the battlements manned. Such is human existence among the fundamentalists, where children are brought up on Genesis and sin is unknown. If I have made the tale too long, then blame the spirit of garrulity that is in the local air. Even newspaper reporters, down here, get some echo of the call. Divine inspiration is as common as the hookworm. I have done my best to show you what the great heritage of mankind comes to in regions where the Bible is the beginning and end of wisdom, and the mountebank Bryan, parading the streets in his seersucker coat, is pointed out to sucklings as the greatest man since Abraham.

_________________________________________________________________
http://www.positiveatheism.org/hist/menck03.htm

Darrow's Eloquent Appeal Wasted on Ears That Heed Only Bryan, Says Mencken
by H.L. Mencken
(The Baltimore Evening Sun, July 14, 1925)

Dayton, Tenn., July 14. -- The net effect of Clarence Darrow's great speech yesterday seems to be precisely the same as if he had bawled it up a rainspout in the interior of Afghanistan. That is, locally, upon the process against the infidel Scopes, upon the so-called minds of these fundamentalists of upland Tennessee. You have but a dim notion of it who have only read it. It was not designed for reading, but for hearing. The clanging of it was as important as the logic. It rose like a wind and ended like a flourish of bugles. The very judge on the bench, toward the end of it, began to look uneasy. But the morons in the audience, when it was over, simply hissed it. During the whole time of its delivery the old mountebank, Bryan, sat tight-lipped and unmoved. There is, of course, no reason why it should have shaken him. He has those hill billies locked up in his pen and he knows it.
His brand is on them. He is at home among them. Since his earliest days, indeed, his chief strength has been among the folk of remote hills and forlorn and lonely farms. Now with his political aspirations all gone to pot, he turns to them for religious consolations. They understand his peculiar imbecilities. His nonsense is their ideal of sense. When he deluges them with his theological bilge they rejoice like pilgrims disporting in the river Jordan. The town whisper is that the local attorney-general, Stewart, is not a fundamentalist, and hence has no stomach for his job. It seems not improbable. He is a man of evident education, and his argument yesterday was confined very strictly to the constitutional points -- the argument of a competent and conscientious lawyer, and to me, at least very persuasive. But Stewart, after all, is a foreigner here, almost as much so as Darrow or Hays or Malone. He is doing his job and that is all. The real animus of the prosecution centers in Bryan. He is the plaintiff and prosecutor. The local lawyers are simply bottle-holders for him. He will win the case, not by academic appeals to law and precedent, but by direct and powerful appeals to the immemorial fears and superstitions of man. It is no wonder that he is hot against Scopes. Five years of Scopes and even these mountaineers would begin to laugh at Bryan. Ten years and they would ride him out of town on a rail, with one Baptist parson in front of him and another behind. But there will be no ten years of Scopes, nor five years, nor even one year. Such brash young fellows, debauched by the enlightenment, must be disposed of before they become dangerous, and Bryan is here, with his tight lips and hard eyes, to see that this one is disposed of. The talk of the lawyers, even the magnificent talk of Darrow, is so much idle wind music. The case will not be decided by logic, nor even by eloquence. 
It will be decided by counting noses -- and for every nose in these hills that has ever thrust itself into any book save the Bible there are a hundred adorned with the brass ring of Bryan. These are his people. They understand him when he speaks in tongues. The same dark face that is in his own eyes is in theirs, too. They feel with him, and they relish him. I sincerely hope that the nobility and gentry of the lowlands will not make the colossal mistake of viewing this trial of Scopes as a trivial farce. Full of rustic japes and in bad taste, it is, to be sure, somewhat comic on the surface. One laughs to see lawyers sweat. The jury, marched down Broadway, would set New York by the ears. But all of that is only skin deep. Deeper down there are the beginnings of a struggle that may go on to melodrama of the first caliber, and when the curtain falls at last all the laughter may be coming from the yokels. You probably laughed at the prohibitionists, say, back in 1914. Well, don't make the same error twice. As I have said, Bryan understands these peasants, and they understand him. He is a bit mangey and flea-bitten, but by no means ready for his harp. He may last five years, ten years or even longer. What he may accomplish in that time, seen here at close range, looms up immensely larger than it appears to a city man five hundred miles away. The fellow is full of such bitter, implacable hatreds that they radiate from him like heat from a stove. He hates the learning that he cannot grasp. He hates those who sneer at him. He hates, in general, all who stand apart from his own pathetic commonness. And the yokels hate with him, some of them almost as bitterly as he does himself. They are willing and eager to follow him -- and he has already given them a taste of blood. Darrow's peroration yesterday was interrupted by Judge Raulston, but the force of it got into the air nevertheless.
This year it is a misdemeanor for a country school teacher to flout the archaic nonsense of Genesis. Next year it will be a felony. The year after the net will be spread wider. Pedagogues, after all, are small game; there are larger birds to snare -- larger and juicier. Bryan has his fishy eye on them. He will fetch them if his mind lasts, and the lamp holds out to burn. No man with a mouth like that ever lets go. Nor ever lacks followers. Tennessee is bearing the brunt of the first attack simply because the civilized minority, down here, is extraordinarily pusillanimous. I have met no educated man who is not ashamed of the ridicule that has fallen upon the State, and I have met none, save only judge Neal, who had the courage to speak out while it was yet time. No Tennessee counsel of any importance came into the case until yesterday and then they came in stepping very softly as if taking a brief for sense were a dangerous matter. When Bryan did his first rampaging here all these men were silent. They had known for years what was going on in the hills. They knew what the country preachers were preaching -- what degraded nonsense was being rammed and hammered into yokel skulls. But they were afraid to go out against the imposture while it was in the making, and when any outsider denounced it they fell upon him violently as an enemy of Tennessee. Now Tennessee is paying for that poltroonery. The State is smiling and beautiful, and of late it has begun to be rich. I know of no American city that is set in more lovely scenery than Chattanooga, or that has more charming homes. The civilized minority is as large here, I believe, as anywhere else. It has made a city of splendid material comforts and kept it in order. But it has neglected in the past the unpleasant business of following what was going on in the cross roads Little Bethels. The Baptist preachers ranted unchallenged. Their buffooneries were mistaken for humor. 
Now the clowns turn out to be armed, and have begun to shoot. In his argument yesterday Judge Neal had to admit pathetically that it was hopeless to fight for a repeal of the anti-evolution law. The Legislature of Tennessee, like the Legislature of every other American state, is made up of cheap job-seekers and ignoramuses. The Governor of the State is a politician ten times cheaper and trashier. It is vain to look for relief from such men. If the State is to be saved at all, it must be saved by the courts. For one, I have little hope of relief in that direction, despite Hays' logic and Darrow's eloquence. Constitutions, in America, no longer mean what they say. To mention the Bill of Rights is to be damned as a Red. The rabble is in the saddle, and down here it makes its first campaign under a general beside whom Wat Tylor seems like a wart beside the Matterhorn.

------------------------

Law and Freedom, Mencken Discovers, Yield Place to Holy Writ in Rhea County
by H.L. Mencken
(The Baltimore Evening Sun, July 15, 1925)

Dayton, Tenn., July 15. -- The cops have come up from Chattanooga to help save Dayton from the devil. Darrow, Malone and Hays, of course, are immune to constabulary process, despite their obscene attack upon prayer. But all other atheists and anarchists now have public notice they must shut up forthwith and stay shut so long as they pollute this bright, shining buckle of the Bible belt with their presence. Only one avowed infidel has ventured to make a public address. The Chattanooga police nabbed him instantly, and he is now under surveillance in a hotel. Let him but drop one of his impious tracts from his window and he will be transferred to the town hoose-gow. The Constitution of Tennessee, as everyone knows, puts free speech among the most sacred rights of the citizen. More, I am informed by eminent Chattanooga counsel that there is no State law denying it -- that is, for persons not pedagogues.
But the cops of Chattanooga, like their brethren elsewhere, do not let constitutions stand in the way of their exercise of their lawful duty. The captain in charge of the squad now on watch told me frankly yesterday that he was not going to let any infidels discharge their damnable nonsense upon the town. I asked him what charge he would lay against them if they flouted him. He said he would jail them for disturbing the peace.

"But suppose," I asked him, "a prisoner is actually not disturbing the peace. Suppose he is simply saying his say in a quiet and orderly manner."

"I'll arrest him anyhow," said the cop.

"Even if no one complains of him?"

"I'll complain myself."

"Under what law precisely?"

"We don't need no law for them kind of people."

It sounded like New York in the old days, before Mayor Gaynor took the constitution out of cold storage and began to belabor the gendarmerie with it. The captain admitted freely that speaking in the streets was not disturbing the peace so long as the speaker stuck to orthodox Christian doctrine as it is understood by the local exegetes. A preacher of any sect that admits the literal authenticity of Genesis is free to gather a crowd at any time and talk all he wants. More, he may engage in a disputation with any other expert. I have heard at least a hundred such discussions, and some of them have been very acrimonious. But the instant a speaker utters a word against divine revelation he begins to disturb the peace and is liable to immediate arrest and confinement in the calaboose beside the railroad tracks. Such is criminal law in Rhea county as interpreted by the uniformed and freely sweating agents. As I have said, there are legal authorities in Chattanooga who dissent sharply, and even argue that the cops are a set of numbskulls and ought to be locked up as public nuisances. But one need not live a long, incandescent week in the Bible belt to know that jurisprudence becomes a new science as one crosses the border.
Here the ordinary statutes are reinforced by Holy Writ, and whenever there is a conflict Holy Writ takes precedence. Judge Raulston himself has decided, in effect, that in a trial for heresy it is perfectly fair and proper to begin proceedings with a prayer for the confutation and salvation of the defendant. On lower levels, and especially in the depths where policemen do their thinking, the doctrine is even more frankly stated. Before laying Christians by the heels the cops must formulate definite charges against them. They must be accused of something specifically unlawful and there must be witnesses to the act. But infidels are ferae naturae, and any cop is free to bag them at sight and to hold them in durance at his pleasure. To the same category, it appears, belong political and economic radicals. News came the other day to Pastor T.T. Martin, who is holding a continuous anti-evolution convention in the town, that a party of I.W.W.'s, their pockets full of Russian gold, had started out from Cincinnati to assassinate him. A bit later came word they would bump off Bryan after they had finished Martin, and then set fire to the town churches. Martin first warned Bryan and then complained to the police. The latter were instantly agog. Guards were posted at strategic centers and a watch was kept upon all strangers of a sinister appearance. But the I.W.W.'s were not caught. Yesterday Pastor Martin told me that he had news that they had gone back to Cincinnati to perfect the plot. He posts audiences at every meeting. If the Reds return they will be scotched. Arthur Garfield Hays, who is not only one of the counsel for the infidel Scopes but also agent and attorney of the notorious American Civil Liberties Union in New York, is planning to hold a free speech meeting on the Courthouse lawn and so make a test of the law against disturbing the peace as it is interpreted by the polizei. Hays will be well advertised if he carries out this subversive intention.
It is hot enough in the courtroom in the glare of a thousand fundamentalist eyes; in the town jail he would sweat to death. Rhea county is very hospitable and, judged by Bible belt standards, very tolerant. The Dayton Babbitts gave a banquet to Darrow, despite the danger from lightning, meteors and earthquakes. Even Malone is treated politely, though the very horned cattle in the fields know that he is a Catholic and in constant communication with the Pope. But liberty is one thing and license is quite another. Within the bounds of Genesis the utmost play of opinion is permitted and even encouraged. An evangelist with a new scheme for getting into Heaven can get a crowd in two minutes. But once a speaker admits a doubt, however cautiously, he is handed over to the secular arm. Two Unitarian clergymen are prowling around the town looking for a chance to discharge their "hellish heresies." One of them is Potter, of New York; the other is Birckhead, of Kansas City. So far they have not made any progress. Potter induced one of the local Methodist parsons to give him a hearing, but the congregation protested and the next day the parson had to resign his charge. The Methodists, as I have previously reported, are regarded almost as infidels in Rhea county. Their doctrines, which seem somewhat severe in Baltimore, especially to persons who love a merry life, are here viewed as loose to the point of indecency. The four Methodists on the jury are suspected of being against hanging Scopes, at least without a fair trial. The State tried to get rid of one of them even after he had been passed; his neighbors had come in from his village with news that he had a banjo concealed in his house and was known to read the Literary Digest. The other Unitarian clergyman, Dr. Birckhead, is not actually domiciled in the town, but is encamped, with his wife and child, on the road outside. He is on an automobile tour and stopped off here to see if a chance offered to spread his "poisons." 
So far he has found none. Yesterday afternoon a Jewish rabbi from Nashville also showed up, Marks by name. He offered to read and expound Genesis in Hebrew, but found no takers. The Holy Rollers hereabout, when they are seized by the gift of tongues, avoid Hebrew, apparently as a result of Ku Klux influence. Their favorite among all the sacred dialects is Hittite. It sounds to the infidel like a series of college yells. Judge Raulston's decision yesterday afternoon in the matter of Hays' motion was a masterpiece of unconscious humor. The press stand, in fact, thought he was trying to be jocose deliberately and let off a guffaw that might have gone far if the roar of applause had not choked it off. Hays presented a petition in the name of the two Unitarians, the rabbi and several other theological "reds," praying that in selecting clergymen to open the court with prayer hereafter he choose fundamentalists and anti-fundamentalists alternately. The petition was couched in terms that greatly shocked and enraged the prosecution. When the judge announced that he would leave the nomination of chaplains to the Pastors' Association of the town there was the gust of mirth aforesaid, followed by howls of approval. The Pastors' Association of Dayton is composed of fundamentalists so powerfully orthodox that beside them such a fellow as Dr. John Roach Straton would seem an Ingersoll. The witnesses of the defense, all of them heretics, began to reach town yesterday and are all quartered at what is called the Mansion, an ancient and empty house outside the town limits, now crudely furnished with iron cots, spittoons, playing cards and the other camp equipment of scientists. Few, if any, of these witnesses will ever get a chance to outrage the jury with their blasphemies, but they are of much interest to the townspeople. 
The common belief is that they will be blown up with one mighty blast when the verdict of the twelve men, tried and true, is brought in, and Darrow, Malone, Hays and Neal with them. The country people avoid the Mansion. It is foolish to take unnecessary chances. Going into the courtroom, with Darrow standing there shamelessly and openly challenging the wrath of God, is risk enough. The case promises to drag into next week. The prosecution is fighting desperately and taking every advantage of its superior knowledge of the quirks of local procedure. The defense is heating up and there are few exchanges of courtroom amenities. There will be a lot of oratory before it is all over and some loud and raucous bawling otherwise, and maybe more than one challenge to step outside. The cards seem to be stacked against poor Scopes, but there may be a joker in the pack. Four of the jurymen, as everyone knows, are Methodists, and a Methodist down here belongs to the extreme wing of liberals. Beyond him lie only the justly and incurably damned. What if one of those Methodists, sweating under the dreadful pressure of fundamentalist influence, jumps into the air, cracks his heels together and gives a defiant yell? What if the jury is hung? It will be a good joke on the fundamentalists if it happens, and an even better joke on the defense.

_________________________________________________________________

Mencken Declares Strictly Fair Trial Is Beyond Ken of Tennessee Fundamentalists
by H.L. Mencken
(The Baltimore Evening Sun, July 16, 1925)

Dayton, Tenn., July 16. -- Two things ought to be understood clearly by heathen Northerners who follow the great cause of the State of Tennessee against the infidel Scopes. One is that the old mountebank, Bryan, is no longer thought of as a mere politician and jobseeker in these Godly regions, but has become converted into a great sacerdotal figure, half man and half archangel -- in brief, a sort of fundamentalist pope.
The other is that the fundamentalist mind, running in a single rut for fifty years, is now quite unable to comprehend dissent from its basic superstitions, or to grant any common honesty, or even any decency, to those who reject them. The latter fact explains some of the most astonishing singularities of the present trial -- that is, singularities to one accustomed to more austere procedures. In the average Northern jurisdiction much of what is going on here would be almost unthinkable. Try to imagine a trial going on in a town in which anyone is free to denounce the defendant's case publicly and no one is free to argue for it in the same way -- a trial in a courthouse placarded with handbills set up by his opponents -- a trial before a jury of men who have been roweled and hammered by those opponents for years, and have never heard a clear and fair statement of his answer. But this is not all. It seems impossible, but it is nevertheless a fact that public opinion in Dayton sees no impropriety in the fact that the case was opened with prayer by a clergyman known by everyone to be against Scopes and by no means shy about making the fact clear. Nor by the fact that Bryan, the actual complainant, has been preparing the ground for the prosecution for months. Nor by the fact that, though he is one of the attorneys of record in the case, he is also present in the character of a public evangelist and that throngs go to hear him whenever he speaks, including even the sitting judge. I do not allege here that there is any disposition to resort to lynch law. On the contrary, I believe that there is every intent to give Scopes a fair trial, as a fair trial is understood among fundamentalists. All I desire to show is that all the primary assumptions are immovably against him -- that it is a sheer impossibility for nine-tenths of those he faces to see any merit whatever in his position. 
He is not simply one who has committed a misdemeanor against the peace and dignity of the State, he is also the agent of a heresy almost too hellish to be stated by reputable men. Such reputable men recognize their lawful duty to treat him humanely and even politely, but they also recognize their superior duty to make it plain that they are against his heresy and believe absolutely in the wisdom and virtue of his prosecutors. In view of the fact that everyone here looks for the jury to bring in a verdict of guilty, it might be expected that the prosecution would show a considerable amiability and allow the defense a rather free play. Instead, it is contesting every point very vigorously and taking every advantage of its greatly superior familiarity with local procedure. There is, in fact, a considerable heat in the trial. Bryan and the local lawyers for the State sit glaring at the defense all day and even the Attorney General, A.T. Stewart, who is supposed to have secret doubts about fundamentalism, has shown such pugnacity that it has already brought him to forced apologies. The high point of yesterday's proceedings was reached with the appearance of Dr. Maynard M. Metcalfe, of the Johns Hopkins. The doctor is a somewhat chubby man of bland mien, and during the first part of his testimony, with the jury present, the prosecution apparently viewed him with great equanimity. But the instant he was asked a question bearing directly upon the case at bar there was a flurry in the Bryan pen and Stewart was on his feet with protests. Another question followed, with more and hotter protests. The judge then excluded the jury and the show began. What ensued was, on the surface, a harmless enough dialogue between Dr. Metcalfe and Darrow, but underneath there was very tense drama. At the first question Bryan came out from behind the State's table and planted himself directly in front of Dr. Metcalfe, and not ten feet away.
The two McKenzies followed, with young Sue Hicks at their heels. Then began one of the clearest, most succinct and withal most eloquent presentations of the case for the evolutionists that I have ever heard. The doctor was never at a loss for a word, and his ideas flowed freely and smoothly. Darrow steered him magnificently. A word or two and he was howling down the wind. Another and he hauled up to discharge a broadside. There was no cocksureness in him. Instead he was rather cautious and deprecatory and sometimes he halted and confessed his ignorance. But what he got over before he finished was a superb counterblast to the fundamentalist buncombe. The jury, at least, in theory heard nothing of it, but it went whooping into the radio and it went banging into the face of Bryan. Bryan sat silent throughout the whole scene, his gaze fixed immovably on the witness. Now and then his face darkened and his eyes flashed, but he never uttered a sound. It was, to him, a string of blasphemies out of the devil's mass -- a dreadful series of assaults upon the only true religion. The old gladiator faced his real enemy at last. Here was a sworn agent and attorney of the science he hates and fears -- a well-fed, well-mannered spokesman of the knowledge he abominates. Somehow he reminded me pathetically of the old Holy Roller I heard last week -- the mountain pastor who damned education as a mocking and a corruption. Bryan, too, is afraid of it, for wherever it spreads his trade begins to fall off, and wherever it flourishes he is only a poor clown. But not to these fundamentalists of the hills. Not to yokels he now turns to for consolation in his old age, with the scars of defeat and disaster all over him. To these simple folk, as I have said, he is a prophet of the imperial line -- a lineal successor to Moses and Abraham. The barbaric cosmogony that he believes in seems as reasonable to them as it does to him. 
They share his peasant-like suspicion of all book learning that a plow hand cannot grasp. They believe with him that men who know too much should be seized by the secular arm and put down by force. They dream as he does of a world unanimously sure of Heaven and unanimously idiotic on this earth. This old buzzard, having failed to raise the mob against its rulers, now prepares to raise it against its teachers. He can never be the peasants' President, but there is still a chance to be the peasants' Pope. He leads a new crusade, his bald head glistening, his face streaming with sweat, his chest heaving beneath his rumpled alpaca coat. One somehow pities him, despite his so palpable imbecilities. It is a tragedy, indeed, to begin life as a hero and to end it as a buffoon. But let no one, laughing at him, underestimate the magic that lies in his black, malignant eye, his frayed but still eloquent voice. He can shake and inflame these poor ignoramuses as no other man among us can shake and inflame them, and he is desperately eager to order the charge. In Tennessee he is drilling his army. The big battles, he believes, will be fought elsewhere.

_________________________________________________________________

http://www.positiveatheism.org/hist/menck04.htm

Malone the Victor, Even Though Court Sides with Opponents, Says Mencken
by H.L. Mencken
(The Baltimore Evening Sun, July 17, 1925)

Dayton, Tenn., July 17. -- Though the court decided against him this morning, and the testimony of the experts summoned for the defense will be banned out of the trial of the infidel Scopes, it was Dudley Field Malone who won yesterday's great battle of rhetoricians. When he got upon his legs it was the universal assumption in the courtroom that Judge Raulston's mind was already made up, and that nothing that any lawyer for the defense could say would shake him. But Malone unquestionably shook him. He was, at the end, in plain doubt, and he showed it by his questions.
It took a night's repose to restore him to normalcy. The prosecution won, but it came within an inch of losing. Malone was put up to follow and dispose of Bryan, and he achieved the business magnificently. I doubt that any louder speech has ever been heard in a court of law since the days of Gog and Magog. It roared out of the open windows like the sound of artillery practice, and alarmed the moonshiners and catamounts on distant peaks. Trains thundering by on the nearby railroad sounded faint and far away and when, toward the end, a table covered with standing and gaping journalists gave way with a crash, the noise seemed, by contrast, to be no more than a pizzicato chord upon a viola da gamba. The yokels outside stuffed their Bibles into the loud-speaker horns and yielded themselves joyously to the impact of the original. In brief, Malone was in good voice. It was a great day for Ireland. And for the defense. For Malone not only out-yelled Bryan, he also plainly out-generaled and out-argued him. His speech, indeed, was one of the best presentations of the case against the fundamentalist rubbish that I have ever heard. It was simple in structure, it was clear in reasoning, and at its high points it was overwhelmingly eloquent. It was not long, but it covered the whole ground and it let off many a gaudy skyrocket, and so it conquered even the fundamentalists. At its end they gave it a tremendous cheer -- a cheer at least four times as hearty as that given to Bryan. For these rustics delight in speechifying, and know when it is good. The devil's logic cannot fetch them, but they are not above taking a voluptuous pleasure in his lascivious phrases. The whole speech was addressed to Bryan, and he sat through it in his usual posture, with his palm-leaf fan flapping energetically and his hard, cruel mouth shut tight. The old boy grows more and more pathetic. He has aged greatly during the past few years and begins to look elderly and enfeebled. 
All that remains of his old fire is now in his black eyes. They glitter like dark gems, and in their glitter there is immense and yet futile malignancy. That is all that is left of the Peerless Leader of thirty years ago. Once he had one leg in the White House and the nation trembled under his roars. Now he is a tinpot pope in the coca-cola belt and a brother to the forlorn pastors who belabor half-wits in galvanized iron tabernacles behind the railroad yards. His own speech was a grotesque performance and downright touching in its imbecility. Its climax came when he launched into a furious denunciation of the doctrine that man is a mammal. It seemed a sheer impossibility that any literate man should stand up in public and discharge any such nonsense. Yet the poor old fellow did it. Darrow stared incredulous. Malone sat with his mouth wide open. Hays indulged himself in one of his sardonic chuckles. Stewart and Bryan fils looked extremely uneasy, but the old mountebank ranted on. To call a man a mammal, it appeared, was to flout the revelation of God. The certain effect of the doctrine would be to destroy morality and promote infidelity. The defense let it pass. The lily needed no gilding. There followed some ranting about the Leopold-Loeb case, culminating in the argument that learning was corrupting -- that the colleges by setting science above Genesis were turning their students into murderers. Bryan alleged that Darrow had admitted the fact in his closing speech at the Leopold-Loeb trial, and stopped to search for the passage in a printed copy of the speech. Darrow denied making any such statement, and presently began reading what he actually had said on the subject. Bryan then proceeded to denounce Nietzsche, whom he described as an admirer and follower of Darwin. Darrow challenged the fact and offered to expound what Nietzsche really taught. Bryan waved him off. The effect of the whole harangue was extremely depressing.
It quickly ceased to be an argument addressed to the court -- Bryan, in fact, constantly said "My friends" instead of "Your Honor" -- and became a sermon at the camp-meeting. All the familiar contentions of the Dayton divines appeared in it -- that learning is dangerous, that nothing is true that is not in the Bible, that a yokel who goes to church regularly knows more than any scientist ever heard of. The thing went to fantastic lengths. It became a farrago of puerilities without coherence or sense. I don't think the old man did himself justice. He was in poor voice and his mind seemed to wander. There was far too much hatred in him for him to be persuasive. The crowd, of course, was with him. It has been fed upon just such balderdash for years. Its pastors assault it twice a week with precisely the same nonsense. It is chronically in the position of a populace protected by an espionage act in time of war. That is to say, it is forbidden to laugh at the arguments of one side and forbidden to hear the case of the other side. Bryan has been roving around in the tall grass for years and he knows the bucolic mind. He knows how to reach and inflame its basic delusions and superstitions. He has taken them into his own stock and adorned them with fresh absurdities. Today he may well stand as the archetype of the American rustic. His theology is simply the elemental magic that is preached in a hundred thousand rural churches fifty-two times a year. These Tennessee mountaineers are not more stupid than the city proletariat; they are only less informed. If Darrow, Malone and Hays could make a month's stumping tour in Rhea county I believe that fully a fourth of the population would repudiate fundamentalism, and that not a few of the clergy now in practice would be restored to their old jobs on the railroad. Malone's speech yesterday probably shook a great many true believers; another like it would fetch more than one of them.
But the chances are heavily against them ever hearing a second. Once this trial is over, the darkness will close in again, and it will take long years of diligent and thankless effort to dispel it -- if, indeed, it is ever dispelled at all. With a few brilliant exceptions -- Dr. Neal is an example -- the more civilized Tennesseeans show few signs of being equal to the job. I suspect that politics is what keeps them silent and makes their State ridiculous. Most of them seem to be candidates for office, and a candidate for office, if he would get the votes of fundamentalists, must bawl for Genesis before he begins to bawl for anything else. A typical Tennessee politician is the Governor, Austin Peay. He signed the anti-evolution bill with loud hosannas, and he is now making every effort to turn the excitement of the Scopes trial to his private political uses. The local papers print a telegram that he has sent to Attorney-General A.T. Stewart whooping for prayer. In the North a Governor who indulged in such monkey shines would be rebuked for trying to influence the conduct of a case in court. And he would be derided as a cheap mountebank. But not here. I described Stewart the other day as a man of apparent education and sense and palpably superior to the village lawyers who sit with him at the trial table. I still believe that I described him accurately. Yet even Stewart toward the close of yesterday's session gave an exhibition that would be almost unimaginable in the North. He began his reply to Malone with an intelligent and forceful legal argument, with plenty of evidence of hard study in it. But presently he slid into a violent theological harangue, full of extravagant nonsense. He described the case as a combat between light and darkness and almost descended to the depths of Bryan. Hays challenged him with a question. Didn't he admit, after all, that the defense had a tolerable case; that it ought to be given a chance to present its evidence? 
I transcribe his reply literally: "That which strikes at the very foundations of Christianity is not entitled to a chance." Hays, plainly astounded by this bald statement of the fundamentalist view of due process, pressed the point. Assuming that the defense would present, not opinion but only unadorned fact, would Stewart still object to its admission?

He replied: "Personally, yes."

"But as a lawyer and Attorney-General?" insisted Hays.

"As a lawyer and Attorney-General," said Stewart, "I am the same man."

Such is justice where Genesis is the first and greatest of law books and heresy is still a crime.

_________________________________________________________________

Battle Now Over, Mencken Sees; Genesis Triumphant and Ready for New Jousts
by H.L. Mencken
(The Baltimore Evening Sun, July 18, 1925)

Dayton, Tenn., July 18. -- All that remains of the great cause of the State of Tennessee against the infidel Scopes is the formal business of bumping off the defendant. There may be some legal jousting on Monday and some gaudy oratory on Tuesday, but the main battle is over, with Genesis completely triumphant. Judge Raulston finished the benign business yesterday morning by leaping with soft judicial hosannas into the arms of the prosecution. The sole commentary of the sardonic Darrow consisted of bringing down a metaphorical custard pie upon the occiput of the learned jurist. "I hope," said the latter nervously, "that counsel intends no reflection upon this court." Darrow hunched his shoulders and looked out of the window dreamily. "Your honor," he said, "is, of course, entitled to hope." No doubt the case will be long and fondly remembered by connoisseurs of judicial delicatessen -- that is, as the performances of Weber and Fields are remembered by students of dramatic science. In immediate retrospect, it grows more fantastic and exhilarating. Scopes has had precisely the same fair trial that the Hon. John Philip Hill, accused of bootlegging on the oath of Howard A.
Kelly, would have before the Rev. Dr. George W. Crabbe. He is a fellow not without humor; I find him full of smiles today. On some near tomorrow the Sheriff will collect a month's wages from him, but he has certainly had a lot of fun. More interesting than the hollow buffoonery that remains will be the effect upon the people of Tennessee, the actual prisoners at the bar. That the more civilized of them are in a highly feverish condition of mind must be patent to every visitor. The guffaws that roll in from all sides give them great pain. They are full of bitter protests and valiant projects. They prepare, it appears, to organize, hoist the black flag and offer the fundamentalists of the dung-hills a battle to the death. They will not cease until the last Baptist preacher is in flight over the mountains, and the ordinary intellectual decencies of Christendom are triumphantly restored. With the best will in the world I find it impossible to accept this tall talk with anything resembling confidence. The intelligentsia of Tennessee had their chance and let it get away from them. When the old mountebank, Bryan, first invaded the State with his balderdash they were unanimously silent. When he began to round up converts in the back country they offered him no challenge. When the Legislature passed the anti-evolution bill and the Governor signed it, they contented themselves with murmuring pianissimo. And when the battle was joined at last and the time came for rough stuff only one Tennesseean of any consequence volunteered. That lone volunteer was Dr. John Neal, now of counsel for the defense, a good lawyer and an honest man. His services to Darrow, Malone and Hays have been very valuable and they come out of the case with high respect for him. But how does Tennessee regard him? My impression is that Tennessee vastly underestimates him. Meanwhile the prosecution went on arguing, incredibly, that a farmer who read the Bible knew more than any scientist in the world.
Such dreadful bilge, heard of far away, may seem only ridiculous. But it takes on a different smack, I assure you, when one hears it discharged formally in a court of law and sees it accepted as wisdom by judge and jury. Darrow has lost this case. It was lost long before he came to Dayton. But it seems to me that he has nevertheless performed a great public service by fighting it to a finish and in a perfectly serious way. Let no one mistake it for comedy, farcical though it may be in all its details. It serves notice on the country that Neanderthal man is organizing in these forlorn backwaters of the land, led by a fanatic, rid of sense and devoid of conscience. Tennessee, challenging him too timorously and too late, now sees its courts converted into camp meetings and its Bill of Rights made a mock of by its sworn officers of the law. There are other States that had better look to their arsenals before the Hun is at their gates.

_________________________________________________________________

Tennessee in the Frying Pan
by H.L. Mencken
(The Baltimore Evening Sun, July 20, 1925)

I

That the rising town of Dayton, when it put the infidel Scopes on trial, bit off far more than it has been able to chew -- this melancholy fact must now be evident to everyone. The village Aristides Sophocles Goldsboroughs believed that the trial would bring in a lot of money, and produce a vast mass of free and profitable advertising. They were wrong on both counts, as boomers usually are. Very little money was actually spent by the visitors: the adjacent yokels brought their own lunches and went home to sleep, and the city men from afar rushed down to Chattanooga whenever there was a lull. As for the advertising that went out over the leased wires, I greatly fear that it has quite ruined the town. When people recall it hereafter they will think of it as they think of Herrin, Ill., and Homestead, Pa. It will be a joke town at best, and infamous at worst.
The natives reacted to this advertising very badly. The preliminary publicity, I believe, had somehow disarmed and deceived them. It was mainly amiable spoofing; they took it philosophically, assured by the local Aristideses that it was good for trade. But when the main guard of Eastern and Northern journalists swarmed down, and their dispatches began to show the country and the world exactly how the obscene buffoonery appeared to realistic city men, then the yokels began to sweat coldly, and in a few days they were full of terror and indignation. Some of the bolder spirits, indeed, talked gaudily of direct action against the authors of the "libels." But the history of the Ku Klux and the American Legion offers overwhelming evidence that 100 per cent Americans never fight when the enemy is in strength, and able to make a defense, so the visitors suffered nothing worse than black, black looks. When the last of them departs Daytonians will disinfect the town with sulphur candles, and the local pastors will exorcise the devils that they left behind them.

II

Dayton, of course, is only a ninth-rate country town, and so its agonies are of relatively little interest to the world. Its pastors, I daresay, will be able to console it, and if they fail there is always the old mountebank, Bryan, to give a hand. Faith can not only move mountains; it can also soothe the distressed spirits of mountaineers. The Daytonians, unshaken by Darrow's ribaldries, still believe. They believe that they are not mammals. They believe, on Bryan's word, that they know more than all the men of science of Christendom. They believe, on the authority of Genesis, that the earth is flat and that witches still infest it. They believe, finally and especially, that all who doubt these great facts of revelation will go to hell. So they are consoled. But what of the rest of the people of Tennessee? I greatly fear that they will not attain to consolation so easily.
They are an extremely agreeable folk, and many of them are highly intelligent. I met men and women -- particularly women -- in Chattanooga who showed every sign of the highest culture. They led civilized lives, despite Prohibition, and they were interested in civilized ideas, despite the fog of Fundamentalism in which they moved. I met members of the State judiciary who were as heartily ashamed of the bucolic ass, Raulston, as an Osler would be of a chiropractor. I add the educated clergy: Episcopalians, Unitarians, Jews and so on -- enlightened men, tossing pathetically under the imbecilities of their evangelical colleagues. Chattanooga, as I found it, was charming, but immensely unhappy. What its people ask for -- many of them in plain terms -- is suspended judgment, sympathy, Christian charity, and I believe that they deserve all these things. Dayton may be typical of Tennessee, but it is surely not all of Tennessee. The civilized minority in the State is probably as large as in any other Southern State. What ails it is simply the fact it has been, in the past, too cautious and politic -- that it has been too reluctant to offend the Fundamentalist majority. To that reluctance something else has been added: an uncritical and somewhat childish local patriotism. The Tennesseeans have tolerated their imbeciles for fear that attacking them would bring down the derision of the rest of the country. Now they have the derision, and to excess -- and the attack is ten times as difficult as it ever was before. III How they are to fight their way out of their wallow I do not know. They begin the battle with the enemy in command of every height and every gun; worse, there is a great deal of irresolution in their own ranks. The newspapers of the State, with few exceptions, are very feeble. One of the best of them, the Chattanooga News, set up an eloquent whooping for Bryan the moment he got to Dayton. Before that it had been against the anti-evolution law. 
But with the actual battle joined, it began to wobble, and presently it was printing articles arguing that Fundamentalism, after all, made men happy -- that a Tennesseean gained something valuable by being an ignoramus -- in other words, that a hog in a barnyard was to be envied by an Aristotle. The News was far better than most: it gave space, too, to the other side, and at considerable risk. But its weight, for two weeks, was thrown heavily to Bryan and his balderdash. The pusillanimous attitude of the bar of the State I described in my dispatches from Dayton. It was not until the trial was two days old that any Tennessee lawyers of influence and dignity went to the aid of Dr. John R. Neal -- and even then all of the volunteers enlisted only on condition that their names be kept out of the newspapers. I should except one T.B. McElwee. He sat at the trial table and rendered valuable services. The rest lurked in the background. It was an astounding situation to a Marylander, but it seemed to be regarded as quite natural in Tennessee. The prevailing attitude toward Neal himself was also very amazing. He is an able lawyer and a man of repute, and in any Northern State his courage would get the praise it deserves. But in Tennessee even the intelligentsia seem to feel that he has done something discreditable by sitting at the trial table with Darrow, Hays and Malone. The State buzzes with trivial, idiotic gossip about him -- that he dresses shabbily, that he has political aspirations, and so on. What if he does and has? He has carried himself, in this case, in a way that does higher credit to his native State. But his native State, instead of being proud of him, simply snarls at him behind his back. IV So with every other man concerned with the defense -- most of them, alackaday, foreigners. For example, Rappelyea, the Dayton engineer who was first to go to the aid of Scopes. 
I was told solemnly in Dayton, not once but twenty times, that Rappelyea was (a) a Bowery boy from New York, and (b) an incompetent and ignorant engineer. I went to some trouble to unearth the facts. They were (a) that he was actually a member of one of the oldest Huguenot families in America, and (b) that his professional skill and general culture were such that the visiting scientists sought him out and found pleasure in his company. Such is the punishment that falls upon a civilized man cast among fundamentalists. As I have said, the worst of it is that even the native intelligentsia help to pull the rope. In consequence all the brighter young men of the State -- and it produces plenty of them -- tend to leave it. If they remain, they must be prepared to succumb to the prevailing blather or resign themselves to being more or less infamous. With the anti-evolution law enforced, the State university will rapidly go to pot; no intelligent youth will waste his time upon its courses if he can help it. And so, with the young men lost, the struggle against darkness will become almost hopeless. As I have said, the State still produces plenty of likely young bucks -- if only it could hold them! There is good blood everywhere, even in the mountains. During the dreadful buffooneries of Bryan and Raulston last week two typical specimens sat at the press table. One was Paul Y. Anderson, correspondent of the St. Louis Post-Dispatch, and the other was Joseph Wood Krutch, one of the editors of the Nation. I am very familiar with the work of both of them, and it is my professional judgment that it is of the first caliber. Anderson is one of the best newspaper reporters in America and Krutch is one of the best editorial writers. Well, both were there as foreigners. Both were working for papers that could not exist in Tennessee. Both were viewed by their fellow Tennesseeans not with pride, as credits to the State, but as traitors to the Tennessee Kultur and public enemies. 
Their crime was that they were intelligent men, doing their jobs intelligently. _________________________________________________________________ http://www.positiveatheism.org/hist/menck05.htm#SCOPESC Bryan by H.L. Mencken (The Baltimore Evening Sun, July 27, 1925) I It was plain to everyone, when Bryan came to Dayton, that his great days were behind him -- that he was now definitely an old man, and headed at last for silence. There was a vague, unpleasant manginess about his appearance; he somehow seemed dirty, though a close glance showed him carefully shaved, and clad in immaculate linen. All the hair was gone from the dome of his head, and it had begun to fall out, too, behind his ears, like that of the late Samuel Gompers. The old resonance had departed from his voice: what was once a bugle blast had become reedy and quavering. Who knows that, like Demosthenes, he had a lisp? In his prime, under the magic of his eloquence, no one noticed it. But when he spoke at Dayton it was always audible. When I first encountered him, on the sidewalk in front of the Hicks brothers law office, the trial was yet to begin, and so he was still expansive and amiable. I had printed in the Nation, a week or so before, an article arguing that the anti-evolution law, whatever its unwisdom, was at least constitutional -- that policing school teachers was certainly not putting down free speech. The old boy professed to be delighted with the argument, and gave the gaping bystanders to understand that I was a talented publicist. In turn I admired the curious shirt he wore -- sleeveless and with the neck cut very low. We parted in the manner of two Spanish ambassadors. But that was the last touch of affability that I was destined to see in Bryan. The next day the battle joined and his face became hard. By the end of the first week he was simply a walking malignancy. Hour by hour he grew more bitter. 
What the Christian Scientists call malicious animal magnetism seemed to radiate from him like heat from a stove. From my place in the court-room, standing upon a table, I looked directly down upon him, sweating horribly and pumping his palm-leaf fan. His eyes fascinated me: I watched them all day long. They were blazing points of hatred. They glittered like occult and sinister gems. Now and then they wandered to me, and I got my share. It was like coming under fire. II What was behind that consuming hatred? At first I thought that it was mere evangelical passion. Evangelical Christianity, as everyone knows, is founded upon hate, as the Christianity of Christ was founded upon love. But even evangelical Christians occasionally loose their belts and belch amicably; I have known some who, off duty, were very benignant. In that very courtroom, indeed, were some of them -- for example, old Ben McKenzie, Nestor of the Dayton bar, who sat beside Bryan. Ben was full of good humor. He made jokes with Darrow. But Bryan only glared. One day it dawned on me that Bryan, after all, was an evangelical Christian only by sort of afterthought -- that his career in this world, and the glories thereof, had actually come to an end before he ever began whooping for Genesis. So I came to this conclusion: that what really moved him was a lust for revenge. The men of the cities had destroyed him and made a mock of him; now he would lead the yokels against them. Various facts clicked into the theory, and I hold it still. The hatred in the old man's burning eyes was not for the enemies of God; it was for the enemies of Bryan. Thus he fought his last fight, eager only for blood. It quickly became frenzied and preposterous, and after that pathetic. All sense departed from him. He bit right and left, like a dog with rabies. He descended to demagogy so dreadful that his very associates blushed. His one yearning was to keep his yokels heated up -- to lead his forlorn mob against the foe. 
That foe, alas, refused to be alarmed. It insisted upon seeing the battle as a comedy. Even Darrow, who knew better, occasionally yielded to the prevailing spirit. Finally, he lured poor Bryan into a folly almost incredible. I allude to his astounding argument against the notion that man is a mammal. I am glad I heard it, for otherwise I'd never believe it. There stood the man who had been thrice a candidate for the Presidency of the Republic -- and once, I believe, elected -- there he stood in the glare of the world, uttering stuff that a boy of eight would laugh at! The artful Darrow led him on: he repeated it, ranted for it, bellowed it in his cracked voice. A tragedy, indeed! He came into life a hero, a Galahad, in bright and shining armor. Now he was passing out a pathetic fool. III Worse, I believe that he somehow sensed the fact -- that he realized his personal failure, whatever the success of the grotesque cause he spoke for. I had left Dayton before Darrow's cross-examination brought him to his final absurdity, but I heard his long speech against the admission of expert testimony, and I saw how it fell flat and how Bryan himself was conscious of the fact. When he sat down he was done for, and he knew it. The old magic had failed to work; there was applause but there were no exultant shouts. When, half an hour later, Dudley Field Malone delivered his terrific philippic, the very yokels gave him five times the clapper-clawing that they had given to Bryan. This combat was the old leader's last, and it symbolized in more than one way his passing. Two women sat through it, the one old and crippled, the other young and in the full flush of beauty. The first was Mrs. Bryan; the second was Mrs. Malone. When Malone finished his speech the crowd stormed his wife with felicitations, and she glowed as only a woman can who has seen her man fight a hard fight and win gloriously. But no one congratulated Mrs. Bryan. 
She sat hunched in her chair near the judge, apparently very uneasy. I thought then that she was ill -- she has been making the round of sanitariums for years, and was lately in the hands of a faith-healer -- but now I think that some appalling prescience was upon her, and that she saw in Bryan's eyes a hint of the collapse that was so near. He sank into his seat a wreck, and was presently forgotten in the blast of Malone's titanic rhetoric. His speech had been maundering, feeble and often downright idiotic. Presumably, he was speaking to a point of law, but it was quickly apparent that he knew no more law than the bailiff at the door. So he launched into mere violent garrulity. He dragged in snatches of ancient chautauqua addresses; he wandered up hill and down dale. Finally, Darrow lured him into that fabulous imbecility about man as a mammal. He sat down one of the most tragic asses in American history. IV It is the national custom to sentimentalize the dead, as it is to sentimentalize men about to be hanged. Perhaps I fall into that weakness here. The Bryan I shall remember is the Bryan of his last weeks on earth -- broken, furious, and infinitely pathetic. It was impossible to meet his hatred with hatred to match it. He was winning a battle that would make him forever infamous wherever enlightened men remembered it and him. Even his old enemy, Darrow, was gentle with him at the end. That cross-examination might have been ten times as devastating. It was plain to everyone that the old Berserker Bryan was gone -- that all that remained of him was a pair of glaring and horrible eyes. But what of his life? Did he accomplish any useful thing? Was he, in his day, of any dignity as a man, and of any value to his fellow-men? I doubt it. Bryan, at his best, was simply a magnificent job-seeker. The issues that he bawled about usually meant nothing to him. He was ready to abandon them whenever he could make votes by doing so, and to take up new ones at a moment's notice. 
For years he evaded Prohibition as dangerous; then he embraced it as profitable. At the Democratic National Convention last year he was on both sides, and distrusted by both. In his last great battle there was only a baleful and ridiculous malignancy. If he was pathetic, he was also disgusting. Bryan was a vulgar and common man, a cad undiluted. He was ignorant, bigoted, self-seeking, blatant and dishonest. His career brought him into contact with the first men of his time; he preferred the company of rustic ignoramuses. It was hard to believe, watching him at Dayton, that he had traveled, that he had been received in civilized societies, that he had been a high officer of state. He seemed only a poor clod like those around him, deluded by a childish theology, full of an almost pathological hatred of all learning, all human dignity, all beauty, all fine and noble things. He was a peasant come home to the dung-pile. Imagine a gentleman, and you have imagined everything that he was not. The job before democracy is to get rid of such canaille. If it fails, they will devour it. _________________________________________________________________ Aftermath by H.L. Mencken (The Baltimore Evening Sun, September 14, 1925) I The Liberals, in their continuing discussion of the late trial of the infidel Scopes at Dayton, Tenn., run true to form. That is to say, they show all their habitual lack of humor and all their customary furtive weakness for the delusions of Homo neanderthalensis. I point to two of their most enlightened organs: the eminent New York World and the gifted New Republic. The World is displeased with Mr. Darrow because, in his appalling cross-examination of the mountebank Bryan, he did some violence to the theological superstitions that millions of Americans cherish. 
The New Republic denounces him because he addressed himself, not to "the people of Tennessee" but to the whole country, and because he should have permitted "local lawyers" to assume "the most conspicuous position in the trial." Once more, alas, I find myself unable to follow the best Liberal thought. What the World's contention amounts to, at bottom, is simply the doctrine that a man engaged in combat with superstition should be very polite to superstition. This, I fear, is nonsense. The way to deal with superstition is not to be polite to it, but to tackle it with all arms, and so rout it, cripple it, and make it forever infamous and ridiculous. Is it, perchance, cherished by persons who should know better? Then their folly should be brought out into the light of day, and exhibited there in all its hideousness until they flee from it, hiding their heads in shame. True enough, even a superstitious man has certain inalienable rights. He has a right to harbor and indulge his imbecilities as long as he pleases, provided only he does not try to inflict them upon other men by force. He has a right to argue for them as eloquently as he can, in season and out of season. He has a right to teach them to his children. But certainly he has no right to be protected against the free criticism of those who do not hold them. He has no right to demand that they be treated as sacred. He has no right to preach them without challenge. Did Darrow, in the course of his dreadful bombardment of Bryan, drop a few shells, incidentally, into measurably cleaner camps? Then let the garrisons of those camps look to their defenses. They are free to shoot back. But they can't disarm their enemy. II The meaning of religious freedom, I fear, is sometimes greatly misapprehended. It is taken to be a sort of immunity, not merely from governmental control but also from public opinion. 
A dunderhead gets himself a long-tailed coat, rises behind the sacred desk, and emits such bilge as would gag a Hottentot. Is it to pass unchallenged? If so, then what we have is not religious freedom at all, but the most intolerable and outrageous variety of religious despotism. Any fool, once he is admitted to holy orders, becomes infallible. Any half-wit, by the simple device of ascribing his delusions to revelation, takes on an authority that is denied to all the rest of us. I do not know how many Americans entertain the ideas defended so ineptly by poor Bryan, but probably the number is very large. They are preached once a week in at least a hundred thousand rural churches, and they are heard too in the meaner quarters of the great cities. Nevertheless, though they are thus held to be sound by millions, these ideas remain mere rubbish. Not only are they not supported by the known facts; they are in direct contravention of the known facts. No man whose information is sound and whose mind functions normally can conceivably credit them. They are the products of ignorance and stupidity, either or both. What should be a civilized man's attitude toward such superstitions? It seems to me that the only attitude possible to him is one of contempt. If he admits that they have any intellectual dignity whatever, he admits that he himself has none. If he pretends to a respect for those who believe in them, he pretends falsely, and sinks almost to their level. When he is challenged he must answer honestly, regardless of tender feelings. That is what Darrow did at Dayton, and the issue plainly justified the act. Bryan went there in a hero's shining armor, bent deliberately upon a gross crime against sense. He came out a wrecked and preposterous charlatan, his tail between his legs. Few Americans have ever done so much for their country in a whole lifetime as Darrow did in two hours. III The caveat of the New Republic is so absurd that it scarcely deserves an answer. 
It is based upon a complete misunderstanding of the situation that the Scopes trial revealed. What good would it have done to have addressed an appeal to the people of Tennessee? They had already, by their lawful representatives, adopted the anti-evolution statute by an immense majority, and they were plainly determined to uphold it. The newspapers of the State, with one or two exceptions, were violently in favor of the prosecution, and applauded every effort of the rustic judge and district attorney to deprive the defense of its most elemental rights. True enough, there was a minority of Tennesseeans on the other side -- men and women who felt keenly the disgrace of their State, and were eager to put an end to it. But their time had passed; they had missed their chance. They should have stepped forward at the very beginning, long before Darrow got into the case. Instead, they hung back timorously, and so Bryan and the Baptist pastors ran amok. There was a brilliant exception: John R. Neal. There was another: T.R. Elwell. Both lawyers. But the rest of the lawyers of the State, when the issue was joined at last, actually helped the prosecution. Their bar associations kept up a continuous fusillade. They tried their best to prod the backwoods Dogberry, Raulston, into putting Darrow into jail. There was but one way to meet this situation and Darrow adopted it. He appealed directly to the country and to the world. He had at these recreant Tennesseeans by exhibiting their shame to all men, near and far. He showed them cringing before the rustic theologians, and afraid of Bryan. He turned the State inside out, and showed what civilization can come to under Fundamentalism. The effects of that cruel exposure are now visible. Tennessee is still spluttering -- and blushing. The uproar staggered its people. And they are doing some very painful thinking. Will they cling to Fundamentalism or will they restore civilization? 
I suspect that the quick decision of their neighbor, Georgia, will help them to choose. Darrow did more for them, in two weeks, than all their pastors and politicians had done since the Civil War. IV His conduct of the case, in fact, was adept and intelligent from beginning to end. It is hard, in retrospect, to imagine him improving it. He faced immense technical difficulties. In order to get out of the clutches of the village Dogberry and before judges of greater intelligence he had to work deliberately for the conviction of his client. In order to evade the puerile question of that client's guilt or innocence and so bring the underlying issues before the country, he had to set up a sham battle on the side lines. And in order to expose the gross ignorance and superstition of the real prosecutor, Bryan, he had to lure the old imposter upon the stand. It seems to me that he accomplished all of these things with great skill. Scopes was duly convicted, and the constitutional questions involved in the law will now be heard by competent judges and decided without resort to prayer and moving pictures. The whole world has been made familiar with the issues, and the nature of the menace that Fundamentalism offers to civilization is now familiar to every schoolboy. And Bryan was duly scotched, and, if he had lived, would be standing before the country today as a comic figure, tattered and preposterous. All this was accomplished, in infernal weather, by a man of sixty-eight, with the scars of battles all over him. He had, to be sure, highly competent help. At his table sat lawyers whose peculiar talents, in combination, were of the highest potency -- the brilliant Hays, the eloquent Malone, the daring and patriotic Tennesseean, Neal. But it was Darrow who carried the main burden, and Darrow who shaped the final result. When he confronted Bryan at last, the whole combat came to its climax. 
On the one side was bigotry, ignorance, hatred, superstition, every sort of blackness that the human mind is capable of. On the other side was sense. And sense achieved a great victory. From checker at panix.com Thu Jul 21 20:51:57 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:51:57 -0400 (EDT) Subject: [Paleopsych] Pol.Sci.Q: Albert B. Knapp: The HBV and HCV Pandemics: Health, Political, and Security Challenges Message-ID: Albert B. Knapp: The HBV and HCV Pandemics: Health, Political, and Security Challenges Political Science Quarterly 120.2 (2005): 243-251 [First, the summary from CHE, 5.7.21 The Chronicle of Higher Education: Magazine & journal reader http://chronicle.com/prm/daily/2005/07/2005072101j.htm A glance at the summer issue of Political Science Quarterly: Ignoring the dangers of hepatitis B and C Hepatitis B and hepatitis C infect nearly 2 billion people worldwide, but HIV, which affects 30 million individuals, gets much more attention, writes Albert B. Knapp, an associate clinical professor of medicine at New York University. If officials do not try harder to curb the spread of hepatitis, it could increasingly harm underdeveloped nations, which do not do as comprehensive a job of preventing and treating the virus as Western countries do, writes Dr. Knapp. Both hepatitis B and hepatitis C, if left untreated, cause chronic liver infection, which could result in death. Hepatitis C, for which no vaccine exists, is more difficult to treat than hepatitis B, which does have a vaccine. "If left unaddressed, by the end of the decade, these viruses could pose a greater danger than the HIV epidemic, not only as a public-health problem, but for the alarming political and security threats they present," writes Dr. Knapp. Hepatitis could weaken developing nations politically and financially by diverting money from economic development and cutting the size of their work force and military, Dr. Knapp writes. 
Those countries should try to raise public awareness about hepatitis, he says. Western countries and large, multinational pharmaceutical companies should develop and donate cheaper therapies to underdeveloped nations to reduce the viruses' impact there, Dr. Knapp writes. They should also increase efforts to make a hepatitis-C vaccine and a longer-lasting hepatitis-B vaccine. "What we lack ... is both a clear political focus and a collective political will, without which the consequences for everyone may be catastrophic," Dr. Knapp writes. The article, "The HBV and HCV Pandemics: Health, Political, and Security Challenges," is available online to subscribers and for purchase at http://www.psqonline.org --Jamie Schuman [I do not know why Adobe "Professional" omitted spaces between words in part of what follows.] ALBERT B. KNAPP is an associate clinical professor of Medicine (Gastroenterology & Hepatology) at the New York University Medical School as well as an attending physician at both the Lenox Hill Hospital and the NYU Hospitals Center. He is the author of over 15 peer-reviewed research publications as well as a textbook on Gastroenterology and Hepatology. Hepatitis B (HBV) and hepatitis C (HCV) infect nearly 2 billion people worldwide, but only recently have health officials in the developed world begun to recognize them as a serious health, political, and security challenge. If left unaddressed, by the end of the decade, these viruses could pose a greater danger than the HIV epidemic, not only as a public health problem, but for the alarming political and security threats they present. Although HBV and HCV are two very different viral entities, both can result in chronic liver infection with eventual progression to cirrhosis, liver cancer, and death. HBV and HCV are found worldwide, with the highest number of cases clustered in sub-Saharan Africa, Egypt, and East Asia. 
Therapy has improved dramatically during the past thirty years, but both cure and worldwide eradication remain elusive goals. I will briefly review the distinctive pathophysiology of both HBV and HCV before addressing the public health, political, and security implications of these two pandemics. Hepatitis B Virus HBV is a large and complex DNA-based virus first identified in 1967.1 More than 1.5 billion people are believed to be infected worldwide, and the highest levels are in sub-Saharan Africa and East Asia, where infection rates of up to 20 percent are common. Viral spread can be either vertical (mother-to-child) or horizontal (early child-to-child contact or through risky sexual behavior). Infection from improperly sterilized intravenous needles (such as among IV drug users) and contaminated blood or blood products accounts for a smaller but important percentage of cases worldwide. 1 Those who have a scientific background may wish to refer to the excellent review article focusing on the molecular biology of HBV by Donald Ganem and Alfred M. Prince, "Hepatitis B Virus Infection--Natural History and Clinical Consequences," New England Journal of Medicine 350 (March 2004): 1118-1129. Most people infected with HBV show no symptoms of the disease. Ten to twenty percent of cases progress insidiously over a twenty- to thirty-year span, leading to cirrhosis, liver cancer, and, ultimately, death. Detection requires expensive individual blood-screening studies as well as vigilant monitoring of the nation's blood supply. Consequently, the likelihood that someone with the infection will be diagnosed and treated depends on both geography and socioeconomic status. Patients in developed countries with access to medical care usually are diagnosed earlier in the infectious cycle and are better candidates for costly but effective oral antiviral therapy. 
Those, however, in either underserved areas of the West or in the developing world usually get medical attention at a more advanced stage of the disease, by which time, even if available, antiviral therapy is less effective. At that point, treatment generally is palliative but, given the massive number of patients, even this is costly for those countries with meager healthcare infrastructures and budgets. Hepatitis C Virus HCV is simpler in molecular design but far deadlier than HBV. It is a close cousin of both the dengue and yellow fever viruses. HCV initially was characterized in 1989.2 As with HIV, the genetic material of HCV is RNA, and this results in a high degree of genetic instability, or mutability, rendering the virus not only more infective and adaptive but exceedingly difficult to treat. At least 250 million people worldwide are estimated to have contracted HCV. But, unlike HBV, the incidence of chronic infection with ineluctable cirrhosis, liver cancer, and death far exceeds 50 percent. HCV is spread horizontally, in most cases, through improperly sterilized needles used by IV drug users or as the result of a contaminated blood supply. Maternal transmission and sexual promiscuity are other important risk factors. Low socioeconomic status plays a role in the likelihood of infection, but experts don't quite understand the link. On the basis of some very elegant epidemiological research, HCV is thought to have originated as a zoonotic (animal-to-human) mutation somewhere in the Pacific Theater during the Second World War. HCV insidiously infected large numbers of Japanese troops in China and Southeast Asia, and they eventually repatriated the virus. HCV reached the West in a circuitous fashion via infected French and American veterans of the Indochinese conflicts. 
Worldwide dissemination ensued from contamination of the blood supply in both Western Europe and the United States, and from a surge in infected IV needles shared by drug users or, in the case of at least one country, by lax sanitary controls during a massive national vaccination program. 2 For further in-depth details, please see G.M. Lauer and B.D. Walker, "Hepatitis C Virus Infection," New England Journal of Medicine 345 (July 2001): 41-52. Unlike HBV, treatment for HCV is not only costly but also most unsatisfactory at any stage of the disease. Overall therapeutic success is reported at 60 percent, but this is misleading, because those patients with the most-common variant of HCV have response rates of only 40 percent. The economic impact of the disease is extraordinary. Antiviral treatment for one year can exceed $25,000 per person in the United States, but the actual cost is far higher due to both frequent drug side effects and co-morbidity issues. Patients who do not respond to treatment may require liver transplantation. In the West, HCV infection is currently the most common reason for such transplants. In fact, nearly 6,000 were done in the United States in 2004, at an individual price conservatively estimated at $150,000. This excludes the requisite lifetime cost of the expensive, but mandatory, immunosuppressive drug therapy and specialized medical care that all transplant recipients must receive. Meanwhile, drug therapy and surgery are simply out of reach for most people in the developing world because of costs and lack of sophisticated medical support. Consequently, in underdeveloped nations, the average patient receives basic palliative therapy, languishes, wastes away, and dies. Public Health Challenges HBV and HCV present important challenges for the world healthcare establishment in detection, therapy, and containment strategies. 
Overall success varies from nation to nation, depending on the underlying quality and sophistication of a particular state's medical infrastructure as well as its ability and political will to commit vast sums of money to combating the virus. Let us review each of the key challenges for fighting these infections.

Detection

Detection is based on blood screening. Given that both HBV and HCV are clinically undetectable for a major part of their destructive cycles, universal screening of all citizens, and especially of the blood supply, is the ultimate approach. However, realistically, only a small percentage of a nation's population is likely to be effectively screened--even in the most motivated and sophisticated of societies. Therefore, public health planners have refined their strategy toward screening of all "high risk" individuals, but even so, their success in detecting the virus is limited. In the United States, officials estimate that they have identified only about 50 percent of those who have HCV, despite an aggressive national healthcare campaign. On the other hand, public health officials have succeeded brilliantly in detecting and eliminating both viruses from the blood supply in the developed world. What is more, despite the prohibitive cost of the viral detection kits, some inroads have been made in ensuring the safety of blood supplies in developing countries, also.

Treatment

Treatment strategies vary between the West and developing nations. Earlier viral detection in developed countries means that patients get treatment at an earlier stage of the infection, with good success at curing HBV and improving results in treating HCV. The story in the underdeveloped world is starkly different; typically, the average infected patient seeks treatment in the later and more debilitating stages of either disease. Successful outcomes are rare, and the cost of even simple supportive care is relatively quite high and strains already limited healthcare budgets.
In reaction, many poorer nations, encouraged by tactics learned during the HIV struggle, have begun to demand significant discounts, rebates, or frank allocations of medications from the major manufacturers, while requesting increases in dedicated foreign financial aid to combat the viruses. In addition, several Western and non-Western pharmaceutical concerns have begun to develop cheaper antiviral medications based on chemical synthetic processes with lower research and development costs.

Containment Strategies

Containment strategies of different degrees of efficacy and expense have been developed for both the West and the developing world. These include encouragement of universal condom use and prophylactic HBV vaccination. Condoms have gained broader social acceptance during the past two decades as an effective means of preventing the spread of HIV. Condoms also have been shown to help seriously curtail HBV transmission in several large epidemiological studies of gay men in New York City. We suspect that condom use should have a similarly salutary effect on curbing HCV transmission. But because sexual spread is a secondary route for this virus, the anticipated decrease may be minor. Vaccination indeed represents the optimal containment strategy and has been the most effective tool in eradicating other communicable diseases worldwide. Dr. Edward Jenner pioneered this technique in England at the end of the 18th century in an attempt to curb smallpox outbreaks. Vaccination, however, has several drawbacks: It is currently available only for HBV prevention (although an effective HCV vaccine is now in development and could be ready by the end of this decade). Furthermore, inadequately supervised vaccination programs using reusable but improperly sterilized needles or syringes may unintentionally lead to the massive spread of other communicable diseases.
HBV vaccination has been mandatory for young school children in the West since the early 1990s, after it was shown to dramatically decrease viral incidence. The ultimate example is Taiwan, where, following a ten-year mandatory vaccination program for all children, the measurable incidence of HBV plummeted from more than 10 percent to less than 1 percent in this cohort.3 A universal HBV vaccination program for children in the United States, underwritten and signed into law by then-President Bill Clinton in 1993, will probably rank as one of his most important legislative contributions. The only flaw in universal vaccination programs is that the present HBV vaccine is thought to have an effective life of only five to ten years and, therefore, must be supplemented by regular booster shots. Although feasible in preschool and high school populations, regular booster programs for adults present a logistical nightmare.

3 For more details on this fascinating study, see M-H. Chang, C-J. Chen, M-S. Lai, et al., "Universal Hepatitis B Vaccination in Taiwan and the Incidence of Hepatocellular Carcinoma in Children," New England Journal of Medicine 336 (June 1997): 1855-1859.

Meanwhile, developing countries have yet to benefit from this containment effort, because the vaccines are very expensive. Recently, several newer and less-costly alternatives have been developed with poorer nations in mind. Immunological relief for those countries may yet be in sight. It bears noting, also, that vaccination programs are not risk free. If administered improperly, a large program, however well intentioned, can actually serve as a vector of transmission for another infectious agent. Such is the case in Egypt, a nation that presents a cautionary tale of good intentions and poor execution that produced disastrous results.
The parasite schistosomiasis, endemic to the entire Nile River Valley and first described during the time of the Pharaohs, has been a major cause of debilitating liver and bladder disease in large numbers of villagers in both Egypt and the Sudan. From 1958 through 1982, the Egyptian government undertook a massive anti-schistosomal campaign to eradicate this scourge. Tragically, the reusable needles and syringes used in the public health effort were improperly decontaminated, and many of the injections became contaminated with HCV. Subsequently, HCV infection rates surged in the treated group, making Egypt's infection rate the highest in the world: More than 20 percent of the Egyptian population has evidence of HCV infection. By comparison, in neighboring Sudan, which is just as hard hit by schistosomiasis but which did not pursue a campaign of mass injection therapy, the rate of HCV is only 4 percent.4

4 For a thorough review of this distressing topic, see C. Frank, M.K. Mohamed, G.T. Strickland, et al., "The Role of Parenteral Anti-schistosomal Therapy in the Spread of Hepatitis C Virus in Egypt," Lancet 355 (March 2000): 887-889.

Political and Security Issues

There is no evidence that HBV and HCV have been used as weapons of bioterror. From the point of view of today's terrorists, neither HBV nor HCV would be considered a prime bioterror agent, given their relatively slow spread, the twenty- to thirty-year effective lag time before the appearance of significant symptoms, and the relative invulnerability of most nations' blood supplies. Nevertheless, if left untreated, HBV and HCV pose significant long-term social and financial threats to political stability. HBV, for example, has already brought disaster, despair, and chaos to large swaths of sub-Saharan Africa, where billions of dollars were diverted from critical economic development efforts to fight these pandemics. If not checked, HBV and HCV risk imperiling the entire African continent.
The West has successfully controlled both infections with only minimal financial, political, and security concerns. In contrast, the pandemics are likely to further enfeeble the economies of those nations least equipped to cope with the huge economic toll they could exact. Individual citizens of any nation typically look to their governments to ensure their healthcare security. That security hinges on the ability of the state to protect its citizens from contracting either virus, whether through preventive programs, such as HBV vaccination, a secure blood supply, and condom campaigns, or by aggressively treating the disease with antiviral medications and liver transplantation. The wealthier the state and the more sophisticated the healthcare system, the more its citizens expect of their public health programs. To meet those expectations, governments are coming under increasing political pressure to redirect public spending into such preventive measures as education, research, and healthcare infrastructure. Such moves should include widespread HBV vaccination, condom usage, and safeguarding of the blood supply, because multiple epidemiological studies attest to their overall effectiveness and relative lack of expense. And, to a large degree, wealthier nations have done just that. In doing so, these societies have averted wholesale epidemics with the attendant societal and political disruption they threaten. Indeed, the eradication of both HBV and HCV from the Western world's blood supply is, to date, among their greatest public health successes. In the United States, the chances of contracting either HBV or HCV from contaminated blood have fallen to 1:63,000 and 1:130,000, respectively. Although impressive, these results pale in comparison to HIV prevention, where the rate of accidental contraction is 1:493,000.
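Those per-unit odds can be read as expected case counts. The sketch below does the arithmetic in Python; the per-unit risks are the U.S. figures quoted above, while the annual transfusion volume is a hypothetical round number chosen purely for illustration.

```python
# Back-of-the-envelope reading of the per-unit transfusion risks above.
# The transfusion volume is a hypothetical round number, not a real
# statistic; only the 1:63,000 / 1:130,000 / 1:493,000 odds come
# from the text.
risks = {"HBV": 1 / 63_000, "HCV": 1 / 130_000, "HIV": 1 / 493_000}
units_transfused = 1_000_000

for virus, per_unit_risk in risks.items():
    expected_cases = units_transfused * per_unit_risk
    print(f"{virus}: ~{expected_cases:.1f} expected cases per million units")
```

Even at these low per-unit risks, a large transfusion volume still implies a handful of transmissions per million units, which is why still more sensitive screening remains worth pursuing.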
Clearly, further work, in the sense of even more-sensitive HBV and HCV blood detection technology, is needed, and this should be driven by enlightened public policy. HBV vaccination and universal condom usage in the West have also been effective and must be encouraged and expanded. But the prognosis for poorer nations, lacking both the financial base and medical infrastructure, in combating HBV and HCV is far less sanguine. These nations must make both prevention and containment strategies high priorities and summon the political will to raise public awareness of the diseases to the same degree they already have achieved in the HIV campaign. Implementing the same preventive measures already in place in the West is critical to the poorer states' future economic and political health. Failure to protect the population will result in skyrocketing rates of infection, with many of the same political and security repercussions many poorer nations already have suffered from the faster-acting HIV epidemic. As the pool of HBV-HCV-infected citizens widens, no segment of the population will be safe, with the disease spreading to include farmers and workers, as well as the middle and upper classes. With so many layers of society swept up by the pandemics, nations will be hobbled by an ever-worsening labor shortage, as well as a steady decline in individual worker productivity. Lower exports will trigger a loss of foreign currency, and foreign investment is likely to dry up as well. As a result, already economically vulnerable nations will see their economies deteriorate even more rapidly.
At the same time, widening infection rates among the ranks of the military, often the societal backbone of a developing nation, also exacerbate internal and international security concerns. The end result for the state is overwhelming poverty, social unrest, and dangerous political instability. In addition, poorer states will have to make the hard but realistic choice to forego expensive antiviral and transplantation treatments and concentrate on the basic preventive programs. Ultimately though, given the costs of these cheaper alternatives, it will still be up to the West and the large multinational pharmaceutical companies to develop and then donate cheaper antiviral therapies or their alternatives.

Besides the additional economic costs to the West, developed nations, particularly the United States, would face additional political and security challenges, including

o border control, to fend off possible carriers of the disease;
o unusual business, foreign policy, and economic development constraints in dealing with nations where infection rates are high;
o the exploding HCV crisis in the prison population;
o the controversial question of establishing a rational needle exchange program for IV drug users.

Border Control

The relatively porous borders of our globalized society promote the spread of disease through tourism and migration. If the HBV-HCV pandemics expand, the United States and other developed countries may curtail or bar outright tourism or immigration from severely affected areas--with political friction here and abroad a likely consequence. Worse yet, if tourism drops off in nations hard hit by the viruses, it will bring them even more crippling economic disruption and worsening international relations.

Foreign Policy Implications

The spread of these diseases also has troubling implications for future U.S.
foreign policy. For example, for fear of exposing U.S. troops or government personnel to infection in endemic areas, the government may avoid involvement in certain regional conflicts or in peacekeeping endeavors. The Indochinese link to the spread of HCV to the West is a sobering reminder of the risk. In addition, businesses and multinational investors will be more likely to curtail their activities in disease hot spots, creating a catch-22 for these nations by further weakening their financial stability and depriving them of critical capital to help fund treatment and containment efforts.

HCV in U.S. Prisons

The HCV epidemic in the U.S. prison population has become both a significant domestic political issue and an internal economic and security threat. According to a 2003 report from the Department of Justice, slightly more than 2 million people are incarcerated in state and federal prisons. Many inmates have histories of IV drug use, and several local and nationwide studies suggest that at least 20 percent of all prisoners are infected with HCV. Many cases go undiagnosed and untreated, and form an ever-expanding HCV reservoir, currently estimated at 400,000 cases. Amplifying the danger, paroled prisoners can unwittingly infect others in the general population. Aggressive education and early treatment programs must be implemented in our nation's prisons, not only to control this growing internal security threat, but also to avoid the prohibitive healthcare costs of later and more extensive treatment.
Needle Exchange Controversy

Spread of the diseases could be checked by controversial, but highly effective, voluntary free IV needle exchange programs. These carefully administered programs are widely successful in western Europe and are credited with stemming in great part the further spread of both HBV and HCV in the IV drug-using population. Nevertheless, in the United States in 1988, Congress prohibited the adoption of similar voluntary exchange programs. The original intent at the time was to wait until a future secretary of health and human services (HHS) could prove through pilot studies that the IV needle exchanges forestalled the spread of disease without encouraging drug use. Compelling evidence confirming the beneficial effects of free needle exchange programs was presented to HHS Secretary Donna Shalala in 1998, but the ban was not lifted, for domestic political reasons. It is imperative that the present administration review this pressing question and lift the ban in order to stem further infection in this subpopulation.

Solutions

A successful worldwide eradication campaign for both HBV and HCV must focus on universal HBV vaccination and, eventually, HCV vaccination, as well as the safeguarding of the entire world's blood supply. The West has made excellent progress in curtailing and treating both infections in the general population. Nevertheless, glaring deficiencies remain, such as the soaring rate of HCV in the U.S. prison population, which requires prompt attention. The failure to implement the congressionally sanctioned national IV needle exchange program is tragic, because it has been shown in multiple studies to curtail the spread of both viruses in the IV drug-using population. The European experience in this area has been very positive. This is another sticky political situation, but legislators must summon the needed courage to implement this important and rational program.
The economic costs of not interceding, combined with the potential social chaos and political turmoil threatening portions of sub-Saharan Africa, should galvanize the West to aggressively pursue

o medical funding for research and development of cheaper and longer-lasting HBV vaccines;
o development of an HCV vaccine that would be available by the end of the decade and that would complement these other protective measures;
o more-effective and less-costly antiviral medications;
o more-sensitive blood detection technology for the screening of both high-risk individuals and the blood supply.

In the interim, the West should make a collective decision to fund HBV vaccination, blood supply screening, and HBV antiviral programs for those nations deemed at risk. It is ironic that HIV, a virus affecting 30 million people worldwide, commands far more attention, both politically and in research dollars, than HBV and HCV, which together afflict nearly 2 billion people. President George W. Bush's 2004 State of the Union speech pledge of $10 billion (USD) for treatment of HIV in Africa deserves both praise and support. But we should bear in mind that worldwide deaths as a result of HCV alone surpassed those of HIV in 2000 and are expected to rise sharply, according to an unclassified national intelligence estimate published during the last year of the Clinton administration.5 Efforts should be made to expand President Bush's program in order to include HBV and HCV. The European Economic Community and Japan should be encouraged to share the additional financial burden.

5 See: www.aegis.com/news/boc/2000/BC00093.html.

The world has the potential to severely curtail or even rid itself outright of both HBV and HCV within the next generation.
We find ourselves closer than we think to major breakthroughs in antiviral therapy, screening, and vaccination technology and, given our prior experience with smallpox and HIV, we possess the requisite logistical expertise to mount a successful worldwide viral eradication campaign. What we lack, however, is both a clear political focus and a collective political will, without which the consequences for everyone may be catastrophic.

From checker at panix.com Thu Jul 21 20:52:14 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:52:14 -0400 (EDT) Subject: [Paleopsych] Sigma XI: Storied Theory Message-ID:

Storied Theory
http://www.americanscientist.org/template/AssetDetail/assetid/44518?&print=yes&print=yes
see full issue: July-August 2005 Volume: 93 Number: 4 Page: 308 DOI: 10.1511/2005.4.308

MARGINALIA

Storied Theory

Science and stories are not only compatible, they're inseparable, as shown by Einstein's classic 1905 paper on the photoelectric effect

Roald Hoffmann

Science seems to be afraid of storytelling, perhaps because it associates narrative with long, untestable yarns. Stories are perceived as "just" literature. Worse, stories are not reducible to mathematics, so they are unlikely to impress our peers. This fear is misplaced for two reasons. First, in paradigmatic science, hypotheses have to be crafted. What are alternative hypotheses but competing narratives? Invent them as fancifully as you can. Sure, they ought to avoid explicit violations of reality (such as light acting like a particle when everyone knows it's a wave?), but censor those stories lightly. There is time for experiment--by you or others--to discover which story holds up better. The second reason not to fear a story is that human beings do science. A person must decide what molecule is made, what instrument built to measure what property. Yes, there are facts to begin with, facts to build on. But facts are mute.
They generate neither the desire to understand, nor appeals for the patronage that science requires, nor the judgment to do A instead of B, nor the will to overcome a seemingly insuperable failure. Actions, small or large, are taken at a certain time by human beings--who are living out a story.

Better Theory Through Stories

One might think that experiments are more sympathetic than theories to storytelling, because an experiment has a natural chronology and an overcoming of obstacles (see my article, "Narrative," in the July-August 2000 American Scientist). However, I think that narrative is indivisibly fused with the theoretical enterprise, for several reasons.

One, scientific theories are inherently explanatory. In mathematics it's fine to trace the consequences of changing assumptions just for the fun of it. In physics or chemistry, by contrast, one often constructs a theoretical framework to explain a strange experimental finding. In the act of explaining something, we shape a story. So C exists because A leads to B leads to C--and not D.

Two, theory is inventive. This statement is certainly true for chemistry, which today is more about synthesis than analysis and more about creation than discovery. As Anne Poduska, a graduate student in my group, pointed out to me, "theory has a greater opportunity to be fanciful, because you can make up molecules that don't (yet) exist."

Three, theory often provides a single account of how the world works--which is what a story is. In general, theoretical papers do not lay out several hypotheses. They take one and, using a set of mathematical mappings and proof techniques, trace out the consequences. Theories are world-making.

Finally, comparing theory with experiment provides a natural ending. There is a beginning to any theory--some facts, some hypotheses. After setting the stage, developing the readers' interest, engaging them in the fundamental conflict, there is the moment of (often experimental) truth: Will it work?
And if that test of truth is not at hand, perhaps the future holds it. The theorist who restates a problem without touching on an experimental result of some consequence, or who throws out too many unverifiable predictions, will lose credibility and, like a long-winded raconteur, the attention of his or her audience. Coming back to real ground after soaring on mathematical wings gives theory a narrative flow. Let me analyze a theoretical paper to show how this storytelling imperative works. Not just any paper, but a classic appropriate to the centennial of Albert Einstein's great 1905 papers.

The Puzzle of Dwarvish Work

Einstein's paper on the photoelectric effect, published that fecund year, was singled out by the 1921 Nobel Committee (late as usual, and perhaps still afraid of relativity) as the basis for their award. It is also the only one of the 1905 papers that Einstein himself deemed revolutionary. But when one reads the article, the photoelectric effect appears late, as a denouement; the paper begins elsewhere. The unwritten prologue is the contemporary interest in black-body radiation--the tendency of any object, no matter what its composition, to radiate light when it is heated. We see it in iron nestled in the forge, glowing red, then yellow, then white. The intensity of this emitted light varies with the color (wavelength). At low temperatures, bodies radiate in the infrared. As the temperature rises, the maximum intensity of the radiated light moves into the red, then extends through the spectrum to the ultraviolet. At high temperatures, objects radiate intense light across the visible spectrum--that's white heat. The intensity of radiated light diminishes in the extreme ultraviolet and far infrared (see right). Astronomers estimate the temperatures of stars from just such curves. The standard (and eminently successful) understanding of light in Einstein's day came from James Maxwell's electromagnetic theory.
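The red-to-white progression just described can be reproduced numerically from the radiation law that Planck would supply (it enters the story shortly). This is a minimal sketch, not anything from Einstein's paper: it uses rounded SI constants and a brute-force wavelength scan in place of calculus to locate the emission peak at two temperatures.

```python
import math

# Rounded SI constants (not high-precision CODATA values).
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck's radiation law: spectral radiance of a black body."""
    return (2.0 * H * C**2 / wavelength_m**5 /
            (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0))

def peak_wavelength(temp_k):
    """Wavelength of maximum emission, found by a coarse 1 nm scan."""
    grid = (i * 1e-9 for i in range(100, 20000))  # 100 nm to 20 um
    return max(grid, key=lambda w: planck_radiance(w, temp_k))

# Iron in the forge (~1500 K) peaks in the infrared; a star like the
# Sun (~5800 K) peaks in the visible -- the shift the essay describes.
print(peak_wavelength(1500))  # roughly 1.9e-6 m (infrared)
print(peak_wavelength(5800))  # roughly 5.0e-7 m (visible)
```

The scan reproduces Wien's displacement law (peak wavelength inversely proportional to temperature), which is exactly the curve astronomers use to estimate stellar temperatures.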
Coupled with thermodynamics and the kinetic theory of gases--a high expression of Newtonian mechanics--electromagnetic theory led to a "radiation law" that described how the intensity of light varied with wavelength at each temperature. The law fit the data--at long wavelengths. At short wavelengths, the equation derived from electromagnetic theory failed, in what became known as "the ultraviolet catastrophe." In 1900, Max Planck found an expression that fit over the entire range of observations. Planck further perceived that his accurate radiation law could be obtained only if the energies of the little bits of oscillating charge that caused the light (he called them "resonators") assumed discontinuous values. So the quantum was born. Planck had trouble believing that physics was, deep down, discontinuous. He spent many years searching for a way around what he discovered. But that is another story.

How Einstein Tells It

The photoelectric paper is modestly entitled, "On a Heuristic Point of View Concerning the Production and Transformation of Light." Einstein begins by stating the problem posed by the quantum hypothesis: He defines the resonators as bound electrons and takes us, with characteristic clarity, made possible by five years of experience with quanta, through Planck's derivation. He develops the characters in his tale--the radiation, Planck, his resonators, classical electromagnetic theory. Then Einstein does something new. He sets out to derive Planck's radiation law without any assumptions about how light is generated. How does he do that? By assigning an entropy (the measure of randomness, a concept already in wide use by then) to the light and relating that entropy to the density of the radiation. Einstein proves that the entropy of the light in the black body varies with volume just the way that entropy varies with volume for that standby of freshman chemistry, the ideal gas. This demonstration is direct.
It's not Hemingway, but for scientific prose, really exciting. Einstein is taking us somewhere--we don't know where yet, but by the way he sets the scene, by his pace and conviction, we know something is going to happen. Pretty incredible. No resonators, just a functional analogy of atoms or molecules to light. Playing out the analogy, light of a given wavelength could be described as if its energy came in dollops of what Einstein called Rbv/N, and today we would call hv, a constant (h) times the light's frequency (v). But that's just a way of looking at things--it's not for nothing that Einstein put the word heuristic in the title. Or is it? When do stories become real? Back to the paper: Einstein has just rederived Planck's radiation law without resonators. Yet the discreteness of the light's energy, its quantization, is newly manifest in Einstein's work. There is no mistaking it. From this climax the paper cruises along another plateau, then swoops into a breathtaking shift of scene. Philipp Lenard had three years earlier observed "cathode rays," or beams of electrons, by shining light onto a metal. The phenomenon happened only when the frequency of that light exceeded a certain minimum; below that frequency (or above that wavelength)--nothing. After seeing the electrons, Lenard observed that their kinetic energy depended on the color of the light, their number on the intensity of the light. This phenomenon we now call the photoelectric effect. Aside from being today a primary source of information on molecules and surfaces, the effect is behind photoelectric cells opening elevator doors, and is used in solar cells and light-sensitive diodes. Back to 1905. Einstein just says: Let's assume light is quantized in units of hv, and that a "light quantum" (we would call it a photon today) gives up all its energy to a single electron. The electron needs a certain energy to leave the surface; if it has some left over, the extra contributes to its motion. 
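The relation just described is the photoelectric equation: the electron's kinetic energy is the photon's energy hv minus the energy needed to escape the surface (now called the work function). A small sketch of that bookkeeping, with a hypothetical work function chosen only for illustration:

```python
# Einstein's photoelectric relation: E_k = h*nu - W.
# The work function W below is a hypothetical value for a generic
# metal, chosen only for illustration.
H_EV = 4.136e-15   # Planck constant in eV s
W = 2.0            # hypothetical work function, eV

def electron_kinetic_energy(frequency_hz):
    """Kinetic energy (eV) of the ejected electron, or None below threshold."""
    e_k = H_EV * frequency_hz - W
    return e_k if e_k > 0 else None

threshold_hz = W / H_EV  # below ~4.8e14 Hz, no electrons at all
print(electron_kinetic_energy(4.0e14))   # below threshold: None
print(electron_kinetic_energy(7.5e14))   # about 1.1 eV
```

The two signatures Lenard observed fall straight out of this relation: a sharp threshold frequency, and a kinetic energy that depends on the light's color rather than its intensity.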
Einstein calculates, in a couple of terse sentences, the energies involved and finds reasonable agreement with Lenard's measurements. With this and another calculation on the ionization of gases, he brings us down to experimental reality. Except reality is not down, it is evidence. Evidence that this story of light being quantized is not just any story. This one is worth telling to our great-grandchildren. Einstein's theory leaves us soaring, thinking what else this strange, discontinuous view of light might explain. Soon Bohr will use it to give us the first theory of an atom. This story is as exciting as Thomas Mann's 1902 Buddenbrooks, which Einstein might have been reading at the time. The photoelectric paper was submitted to Annalen der Physik (Annals of Physics) in March 1905. But Planck's quantum theory, and the nature of light, had been on Einstein's mind for quite a while. On April 30, 1901 he wrote to his future wife, Mileva Maric, "I came recently on the idea that when light is generated, perhaps there occurs a direct conversion of kinetic energy to light. Because of the parallelism: motional energy of the molecules--absolute temperature--spectrum (energy of radiation in equilibrium). Who knows when a tunnel will be dug through these hard mountains!"

The Story Is in the Theory

All theories tell a story. They have a beginning, in which people and ideas, models, molecules and governing equations take the stage. Their roles are defined; there is a puzzle to solve. Einstein sets his characters into motion so ingeniously, using entropy to tease out the parallels between moving molecules and the energy of light. The story develops; there are consequences of Einstein's approach. And at the end, his view of light as quantized and particular confronts the reality of the heretofore unexplained photoelectric effect. The postscripted future, of all else that can be understood and all new things that can be made, is implicit.
Perceptive reader Anne Poduska notes that the photoelectric paper "is particularly interesting because of the layering of perspectives (similar to legends being passed from one generation to the next, with each storyteller adding their own flair/details)." Indeed, Einstein uses Planck's development of the radiation law even as the younger physicist claims he will do it differently. He parlays belief in the discreteness of molecules (some of his contemporaries still doubted their existence) into an argument, first cautious, then growing in strength, for the discreteness of light. A young man of 25, Einstein had mastered the old stories. In this paper he combined the ways others looked at the world, and trusting analogy as much as mathematics, made something new. Science is an inspired account of the struggle by human beings to understand the world. Changing it in the process. How could this be anything but a story?

Acknowledgment

Thanks to Anne Poduska for her careful reading and suggestions.

Bibliography

* Cassidy, D. 2005. Einstein and the quantum hypothesis. Annalen der Physik (series X) 14 (supplement):15-22.
* Einstein, A. 1905. Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt. Annalen der Physik (series IV) 17:132-148. An English translation may be found in Ter Haar, D. 1967. The Old Quantum Theory. New York: Pergamon, and on the web at http://lorentz.phl.jhu.edu/AnnusMirabilis/AeReserveArticles/eins_lq.pdf
* Kuhn, T. S. 1978. Black-Body Theory and the Quantum Discontinuity, 1894-1912. New York: Oxford University Press.
From checker at panix.com Thu Jul 21 20:52:35 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:52:35 -0400 (EDT) Subject: [Paleopsych] Wiki: List of U.S. Presidential religious affiliations Message-ID:

List of U.S. Presidential religious affiliations - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/List_of_U.S._Presidential_religious_affiliations

[Note this: "Franklin Steiner, in his book The Religious Beliefs Of Our Presidents, categorized Harrison as the first President who was unquestionably a communicant in an orthodox Church at the time he was elected."]

This is a list of the religious affiliations of Presidents of the United States. The particular religious affiliations of U.S. Presidents can affect their electability, shape their visions of society and how they want to lead it, and shape their stances on policy matters. For example, a contributing factor to Alfred E. Smith's defeat in the presidential election of 1928 was his Roman Catholic faith. In the 1960s, President John F. Kennedy faced accusations that as a Catholic president he would do as Pope John XXIII would tell him to do. Thomas Jefferson, Abraham Lincoln, and several other presidents were accused of being infidels during election campaigns -- and at other times. Throughout much of American history, the religion of past American presidents has been the subject of contentious debate. Some devout Americans have been disinclined to believe that there may have been agnostic or even non-Christian presidents, especially amongst the Founding Fathers of the United States. As a result, apocryphal stories of a religious nature have appeared over the years about particularly beloved presidents such as Washington and Lincoln.
On the other hand, [15]secular-minded Americans have sometimes downplayed the prominence that religion played in the private and political lives of the Founding Fathers. [16]Episcopalians are extraordinarily well represented among the presidents. This is in part because the [17]Episcopal Church was the [18]state religion in some states (such as Virginia) before their Constitutions were changed. Before the [19]American Revolution, the Episcopal Church was the American branch of the [20]Church of England. The first seven presidents listed below with Episcopalian affiliation were also the first seven from Virginia, and five of those were among the six presidents most closely identified with [21]Deism. Since there have seldom been any churches of Deism, strictly speaking Deist is not an affiliation in the same way Episcopalian is; it is included in the list below, however, to give a more complete view of the religious views of the presidents. The church closest to the [22]White House is also Episcopal, and has been attended at least once by nearly every president since [23]James Madison. St. John's Episcopal Church, just across Lafayette Square north of the White House, and built after the [24]War of 1812, is one of about five sometimes referred to as "the Church of the Presidents". Many people are interested not only in the religious affiliations of the presidents, but also in their inner beliefs. Some presidents, such as Madison and [25]Monroe, were extremely reluctant to discuss their own religious views at all. In general, it is difficult to define with any certainty the [26]faiths of presidents, because no one can truly be sure what relationship (if any) exists between another person and his deity, and because presidents, as public officials, have generally remained within the mainstream of American religious trends. With regard to [27]Christianity, distinguishing affiliation from belief can be somewhat complicated. 
At issue, to a certain extent, is "What counts as belonging to a church?" Must one be a [28]communicant to belong, or is [29]baptism or even simple attendance sufficient? Are [30]Unitarians, [31]Jehovah's Witnesses, and independents who generally hold [32]Jesus in high regard, but do not believe he was divine, to be counted as Christians or not? Numerous presidents changed their affiliations and/or their beliefs during their lives. George Washington, for example, gravitated from conventional Christianity as a youth towards [33]Deism as he aged. Contents * [34]1 List of Presidential religious affiliations/beliefs (by President) * [35]2 List of Presidential religious affiliations (by religion) * [36]3 External links * [37]4 Further reading * [38]5 Presidential trivia lists [[39]edit] List of Presidential religious affiliations/beliefs (by President) 1. [40]George Washington - [41]Deist; [42]Episcopalian (VA) + The religious views of George Washington are a matter of some controversy. There is strong evidence that he (like many of the Founding Fathers) was a [43]Deist - believing in [44]Divine Providence, but not believing in [45]divine intervention in the world after the initial design. Before the revolution, when the [46]Episcopal Church was still the [47]state religion in [48]Virginia, he served as a vestryman (lay officer) for his local church. He spoke often of the value of religion in general, and he sometimes accompanied his wife to Christian church services. However, there is no record of his ever becoming a communicant in any [49]Christian church and he would regularly leave services before [50]communion - with the other non-communicants. When Rev. Dr. James Abercrombie, rector of St. Peter's Episcopal Church in Philadelphia mentioned in a weekly sermon that those in elevated stations set an unhappy example by leaving at communion, Washington ceased attending at all on communion Sundays. 
Long after Washington died, asked about Washington's beliefs, Abercrombie replied: "Sir, Washington was a Deist." Various prayers said to have been composed by him in his later life are highly edited. He did not ask for any clergy on his deathbed, though one was available. His funeral services were those of the [51]Freemasons. 2. [52]John Adams - [53]Unitarian (MA) + The Adamses were originally members of [54]Congregational churches in [55]New England. Congregationalist churches became more diverse than other [56]Reformed churches such as [57]Presbyterians, where higher courts ensure doctrinal uniformity. Many New England congregations reacted against the [58]First Great Awakening and were influenced by [59]Arminianism, [60]Deism, [61]Unitarianism, and (later) [62]Transcendentalism - moving away from [63]Calvinism and its doctrine of [64]Predestination. By the [65]1750s several Congregational preachers were teaching the possibility of [66]universal salvation. The first Unitarian church in America was established in Boston in 1785. By 1800, all but one Congregationalist church in [67]Boston had Unitarian preachers teaching the [68]strict unity of God, the subordinate nature of Christ, and salvation by character. [69]Harvard University, founded by Congregationalists, itself became a source of Unitarian training. [70][1] 3. [71]Thomas Jefferson - [72]Deist; [73]Episcopalian (VA) + Though a vestryman (lay officer) of the Episcopal Church in Virginia, his beliefs were primarily [74]Deist. Unlike its effect on Congregational churches, Deism had little influence on Episcopal churches, which have a more hierarchical structure making them slower to modify their teachings. Of only three things Jefferson chose for his epitaph, one was the 1786 Statute of Virginia for Religious Freedom. Jefferson's views are considered very close to [75]Unitarian [76][2]. 
The [77]Famous UUs website says: [78][3] "Like many others of his time (he died just one year after the founding of institutional [79]Unitarianism in America), Jefferson was a Unitarian in theology, though not in church membership. He never joined a Unitarian congregation: there were none near his home in Virginia during his lifetime. He regularly attended [80]Joseph Priestley's Pennsylvania church when he was nearby, and said that Priestley's theology was his own, and there is no doubt Priestley should be identified as Unitarian. Jefferson remained a member of the [81]Episcopal congregation near his home, but removed himself from those available to become godparents, because he was not sufficiently in agreement with the [82]trinitarian theology. His work, The [83]Jefferson Bible, was Unitarian in theology..." + A remarkable quote from a letter Jefferson wrote to a Dr. Woods indicates that in fact he possessed considerable antipathy towards Christianity: "I have recently been examining all the known superstitions of the world, and do not find in our particular superstition one redeeming feature. They are all alike founded on fables and mythology." + See [84]Wikiquote and [85]Positive Atheism for many more similar quotes. 4. [86]James Madison - [87]Deist; [88]Episcopalian (VA) + In 1779 the [89]Virginia General Assembly deprived [90]Church of England ministers of tax support, but in 1784 [91]Patrick Henry sponsored a bill to again collect taxes to support churches in general. Madison's 1785 Memorial and Remonstrance was written in opposition to another bill to levy a general assessment for the support of religions. The assessment bill was tabled, and instead the legislature in 1786 passed Jefferson's Bill for Religious Freedom, first submitted in 1779. Virginia thereby became the first state to disestablish religion -- Rhode Island, Delaware, and Pennsylvania never having had an established religion. 5. [92]James Monroe - [93]Deist; [94]Episcopalian (VA) 6. 
[95]John Quincy Adams - [96]Unitarian (MA) [97][4] 7. [98]Andrew Jackson - [99]Presbyterian (NC/SC) + became a member about a year after retiring from the presidency 8. [100]Martin Van Buren - [101]Dutch Reformed or no affiliation (NY) + Van Buren did not join any church in Washington, nor in his home town of [102]Kinderhook, New York. The sole original source to claim that he did join a church - in [103]Hudson, New York - is Vernon B. Hampton, in Religious Background of the White House (Boston: Christopher Publishing House, 1932). The basis for this claim has not been found. 9. [104]William Henry Harrison - possibly [105]Episcopalian (VA) + Harrison died just one month after his inauguration. After the funeral, the rector of St. John's Episcopal Church in Washington, DC, said Harrison had bought a Bible the day after his inauguration and had planned soon to become a communicant. 10. [106]John Tyler - [107]Deist; [108]Episcopalian (VA) 11. [109]James K. Polk - [110]Presbyterian; later [111]Methodist (NC/TN) + Raised Presbyterian, Polk had never been baptized due to an early family argument with the local Presbyterian minister in rural North Carolina. Polk's father and grandfather were Deists, and the minister refused to baptize James unless his father affirmed Christianity, which he would not do. At age 38, Polk had a religious conversion to Methodism at a camp meeting, and thereafter he thought of himself as a Methodist. Out of respect for his mother and wife, however, he continued to attend Presbyterian services. Whenever his wife was out of town, or too ill to attend church, Polk worshipped at the local Methodist chapel. On his deathbed less than 4 months after leaving the Presidency, he summoned the man who had converted him years before, the Rev. John B. McFerrin, who then baptized Polk as a Methodist. 12. [112]Zachary Taylor - [113]Episcopalian (VA) 13. 
[114]Millard Fillmore - [115]Unitarian (NY) + In the early 1830s, he worked to overturn the New York test law that required all witnesses in New York courts to swear an oath affirming their belief in God and the hereafter. 14. [116]Franklin Pierce - [117]Episcopalian (NH) + 1850: unsuccessfully worked to abolish that portion of the New Hampshire Constitution which made Protestantism the official religion. + 1853 inauguration: affirmed instead of swearing the oath; did not kiss the Bible + 1861: 4 years after retiring from the presidency, he was baptized, confirmed, and became a regular communicant in St. Paul's Episcopal Church, in Concord, NH. 15. [118]James Buchanan - [119]Presbyterian (PA) + raised Presbyterian, he joined that church after retiring from the presidency 16. [120]Abraham Lincoln - [121]Deist; no affiliation known (KY/IN/IL) + Life before the presidency o For much of his life, Lincoln was undoubtedly Deist (see [122][5], [123][6]). In his younger days he openly challenged orthodox religions, but as he matured and became a candidate for public office he kept his Deist views more to himself, and would sometimes attend Presbyterian services with his wife. He loved to read the Bible, and even quoted from it, but he almost never made reference to Jesus, and is not known to have ever indicated a belief in the divinity of Jesus. o Evidence against Lincoln's ever being Christian includes offerings from two of Lincoln's most intimate friends, [124]Ward Hill Lamon and [125]William H. Herndon. Both Herndon and Lamon published biographies of their former colleague after his assassination relating their personal recollections of him. Each denied Lincoln's adherence to Christianity and characterized his religious beliefs as deist or atheist. + Lincoln's religion at the time of his death is a matter about which there is more disagreement. 
A number of Christian pastors, writing months and even years after Lincoln's assassination, claimed to have witnessed a late-life conversion by Lincoln to protestant Christianity. Some pastors date a conversion following the death of his son Eddie in 1850, and some following the death of his son Willie in 1862, and some later than that. These accounts are hard to substantiate and historians consider most of them to be [126]apocryphal. o One such account is an entry in the memory book The Lincoln Memorial Album--Immortelles (edited by Osborn H. Oldroyd, 1882, New York: G.W. Carleton & Co., p. 366) attributed to An Illinois clergyman (unnamed) which reads "When I left Springfield I asked the people to pray for me. I was not a Christian. When I buried my son, the severest trial of my life, I was not a Christian. But when I went to Gettysburg and saw the graves of thousands of our soldiers, I then and there consecrated myself to Christ. Yes, I do love Jesus." Other entries in the memory book are attributed by name. See a discussion of this story in They Never Said It, by Paul F. Boller & John George, (Oxford Univ. Press, 1989, p. 91). o Rev. Dr. [127]Phineas D. Gurley, pastor of the New York Avenue Presbyterian church in Washington D.C., which Lincoln attended with his wife when he attended any church, never claimed a conversion. According to D. James Kennedy in his booklet, "What They Believed: The Faith of Washington, Jefferson, and Lincoln", "Dr. Gurley said that Lincoln had wanted to make a public profession of his faith on Easter Sunday morning. But then came Ford's Theater." (p. 59, Published by Coral Ridge Ministries, 2003) Though this is possible, we have no way of verifying the truth of the report. The chief evidence against it is that Dr. Gurley, so far as we know, never mentioned it publicly. The determination to join, if accurate, would have been extremely newsworthy. It would have been reasonable for Dr. 
Gurley to have mentioned it at the funeral in the White House, at which he delivered the sermon that has been preserved[128][7]. The only evidence we have is an affidavit signed more than sixty years later by Mrs. Sidney I. Lauck, then a very old woman. In her affidavit signed under oath in Essex County, New Jersey, [129]February 15, [130]1928, she said, "After Mr. Lincoln's death, Dr. Gurley told me that Mr. Lincoln had made all the necessary arrangements with him and the Session of the New York Avenue Presbyterian Church to be received into the membership of the said church, by confession of his faith in Christ, on the Easter Sunday following the Friday night when Mr. Lincoln was assassinated." Mrs. Lauck was, she said, about thirty years of age at the time of the assassination. 17. [131]Andrew Johnson - no affiliation (NC/TN) + Some sources refer to Johnson as having Baptist parents. He sometimes accompanied his wife to Methodist services, belonged to no church himself, and sometimes attended Catholic services - remarking favorably that there was no reserved seating. Accused of being an infidel, he replied: "As for my religion, it is the doctrine of the Bible, as taught and practiced by Jesus Christ." (See The Age of Hate, 1930, by G.F. Milton, p. 80.) 18. [132]Ulysses S. Grant - no affiliation known (OH) + Grant was never baptized into any church, though he accompanied his wife to Methodist services. Many sources list his religious affiliation as Methodist based on a Methodist minister's account of a deathbed conversion. He did leave a note for his wife in which he hoped to meet her again in a better world. 19. [133]Rutherford B. Hayes - no affiliation (OH) + In his diary entry of [135]17 May [134]1890, he states: "I am not a subscriber to any creed. I belong to no Church. But in a sense satisfactory to myself, and believed by me to be important, I try to be a Christian and to help do Christian work." (page 435) 20. 
[136]James Garfield - [137]Disciples of Christ (OH) + In his early adulthood, Garfield sometimes preached and held revival meetings. 21. [138]Chester A. Arthur - [139]Episcopalian (VT/NY) 22. [140]Grover Cleveland - [141]Presbyterian (NJ/NY) 23. [142]Benjamin Harrison - [143]Presbyterian (OH/IN) + Harrison became a church elder and taught Sunday school + Franklin Steiner, in his book The Religious Beliefs Of Our Presidents[144][8], categorized Harrison as the first President who was unquestionably a communicant in an orthodox Church at the time he was elected 24. [145]Grover Cleveland - [146]Presbyterian (NJ/NY) + During his second (non-consecutive) term, Cleveland included mention of Jesus Christ in his Thanksgiving Proclamation, something no other President had ever done. 25. [147]William McKinley - [148]Methodist (OH) + McKinley believed the U.S. government had a duty to help spread Christianity and Western civilization to the rest of the world. 26. [149]Theodore Roosevelt - [150]Dutch Reformed (NY) + 1908: opposed putting [151]In God We Trust on coins as sacrilegious 27. [152]William Howard Taft - [153]Unitarian (OH) 28. [154]Woodrow Wilson - [155]Presbyterian (VA/GA/NJ) 29. [156]Warren G. Harding - [157]Baptist (OH) 30. [158]Calvin Coolidge - [159]Congregationalist (VT/MA) 31. [160]Herbert Hoover - [161]Quaker (IA/OR/CA) 32. [162]Franklin D. Roosevelt - [163]Episcopalian (NY) 33. [164]Harry S. Truman - [165]Baptist (MO) 34. [166]Dwight D. Eisenhower - [167]Jehovah's Witness; later [168]Presbyterian (TX/KS/PA) + Brought up a Jehovah's Witness, Eisenhower abandoned that faith before joining the [169]United States Military Academy at [170]West Point, New York. (See [171][9], [172][10], and [173][11].) He was baptized, confirmed, and became a communicant in the Presbyterian church in a single ceremony on [175]February 1, [174]1953, just weeks after his first inauguration. 
He is the only president known to be baptized, or to be confirmed, or to become a communicant while in office. Eisenhower was instrumental in the addition of the words "under God" to the [176]Pledge of Allegiance in [177]1954, and the [178]1956 adoption of "[179]In God We Trust" as the [180]motto of the USA, and its 1957 introduction on paper currency. The chapel at his presidential library is intentionally inter-denominational. 35. [181]John F. Kennedy - [182]Roman Catholic (MA) 36. [183]Lyndon Johnson - [184]Disciples of Christ (TX) 37. [185]Richard Nixon - raised [186]Quaker (CA) 38. [187]Gerald R. Ford - [188]Episcopalian (NE/MI) 39. [189]Jimmy Carter - [190]Baptist, [191]born again (GA) + In [192]2000, Carter left the [193]Southern Baptist Convention, disagreeing over the role of women in society. See [194][12] 40. [195]Ronald Reagan - [196]Disciples of Christ (IL/CA) 41. [197]George H. W. Bush - [198]Episcopalian (MA/CT/TX) 42. [199]Bill Clinton - [200]Baptist (AR) 43. [201]George W. Bush - raised [202]Episcopalian, at age 40 became [203]Methodist, [204]born again, religious [205]teetotaler (CT/TX) [[206]edit] List of Presidential religious affiliations (by religion) * [207]Baptist + [208]Warren Harding + [209]Harry Truman + [210]Jimmy Carter + [211]Bill Clinton ([212]Southern Baptist) * [213]Congregationalist + [214]Calvin Coolidge * [215]Deist + [216]George Washington + [217]Thomas Jefferson + [218]James Madison + [219]James Monroe + [220]John Tyler + [221]Abraham Lincoln * [222]Disciples of Christ + [223]James Garfield + [224]Lyndon Johnson + [225]Ronald Reagan * [226]Dutch Reformed + [227]Martin Van Buren + [228]Theodore Roosevelt * [229]Episcopalian - the first 7 listed below were all from Virginia, where the Episcopal Church was the state church until 1786. 
+ [230]George Washington (primarily Deist) + [231]Thomas Jefferson (primarily Deist) + [232]James Madison (primarily Deist) + [233]James Monroe (primarily Deist) + [234]William Henry Harrison (planning on joining?) + [235]John Tyler (primarily Deist) + [236]Zachary Taylor (Deist?) + [237]Franklin Pierce + [238]Chester Arthur + [239]Franklin D. Roosevelt + [240]Gerald Ford + [241]George H. W. Bush * [242]Methodist + [243]James Polk (originally [244]Presbyterian) + [245]Ulysses Grant (also listed as none known) + [246]William McKinley + [247]George W. Bush * [248]Presbyterian + [249]Andrew Jackson + [250]James Polk (later [251]Methodist) + [252]James Buchanan + [253]Grover Cleveland + [254]Benjamin Harrison + [255]Woodrow Wilson + [256]Dwight D. Eisenhower (originally [257]Jehovah's Witnesses) * [258]Quaker + [259]Herbert Hoover + [260]Richard Nixon * [261]Roman Catholic + [262]John F. Kennedy * [263]Jehovah's Witnesses + [264]Dwight D. Eisenhower (later [265]Presbyterian) * [266]Unitarian - [267]Unitarian Universalism is the religion generally associated today with those whose ideology developed from [268]Deism. + [269]John Adams + [270]John Quincy Adams + [271]Millard Fillmore + [272]William Howard Taft * Presidents without affiliation + [273]Abraham Lincoln + [274]Andrew Johnson + [275]Ulysses Grant (also listed as Methodist) + [276]Rutherford Hayes [[277]edit] External links * [278]Adherents.com's list * [279]Abraham Lincoln was a Deist * [280]Excerpts from The Religious Beliefs of Our Presidents, 1936, by Franklin Steiner * [281]Six Historic Americans by John Remsburg, 1906, examines religious views of Paine, Jefferson, Washington, Franklin, Lincoln, & Grant * [282]U.S. Library of Congress site: James Hutson article, James Madison and the Social Utility of Religion [[283]edit] Further reading * Steiner, Franklin, The Religious Beliefs of Our Presidents: From Washington to F.D.R., Prometheus Books/The Freethought Library, July 1995. 
[284]ISBN 0879759755 Presidential trivia lists [286]U.S. Presidential lists [287]Doctrines | [288]Libraries | [289]Nicknames | [290]Pets | [291]Residences | [292]College education | [293]Date of birth | [294]Date of death | [295]Genealogical relationship | [296]Height order | [297]Longevity | [298]Military rank | [299]Military service | [300]Place of birth | [301]Place of primary affiliation | [302]Political affiliation | [303]Political occupation | [304]Previous occupation | [305]Religious affiliation | [306]Time in office | [307]Served one term | [308]Served two or more terms | [309]Vice Presidents by time in office References Visible links 1. http://www.gnu.org/copyleft/fdl.html 2. http://en.wikipedia.org/wiki/User_talk:166.84.1.1 3. http://en.wikipedia.org/wiki/Religion 4. http://en.wikipedia.org/wiki/President_of_the_United_States 5. http://en.wikipedia.org/wiki/Alfred_E._Smith 6. http://en.wikipedia.org/wiki/U.S._presidential_election%2C_1928 7. http://en.wikipedia.org/wiki/Catholicism 8. http://en.wikipedia.org/wiki/John_F._Kennedy 9. http://en.wikipedia.org/wiki/Pope_John_XXIII 10. http://en.wikipedia.org/wiki/Thomas_Jefferson 11. http://en.wikipedia.org/wiki/Abraham_Lincoln 12. http://en.wikipedia.org/wiki/Agnostic 13. http://en.wikipedia.org/wiki/Founding_Fathers_of_the_United_States 14. http://en.wikipedia.org/wiki/George_Washington 15. http://en.wikipedia.org/wiki/Secular 16. http://en.wikipedia.org/wiki/Episcopal 17. http://en.wikipedia.org/wiki/Episcopal_Church_in_the_United_States_of_America 18. http://en.wikipedia.org/wiki/State_religion 19. http://en.wikipedia.org/wiki/American_Revolution 20. http://en.wikipedia.org/wiki/Church_of_England 21. http://en.wikipedia.org/wiki/Deism 22. http://en.wikipedia.org/wiki/White_House 23. http://en.wikipedia.org/wiki/James_Madison 24. http://en.wikipedia.org/wiki/War_of_1812 25. http://en.wikipedia.org/wiki/James_Monroe 26. http://en.wikipedia.org/wiki/Faith 27. http://en.wikipedia.org/wiki/Christianity 28. 
http://en.wikipedia.org/wiki/Eucharist 29. http://en.wikipedia.org/wiki/Baptism 30. http://en.wikipedia.org/wiki/Unitarianism 31. http://en.wikipedia.org/wiki/Jehovah%27s_Witnesses 32. http://en.wikipedia.org/wiki/Jesus 33. http://en.wikipedia.org/wiki/Deism 34. http://en.wikipedia.org/wiki/List_of_U.S._Presidential_religious_affiliations#List_of_Presidential_religious_affiliations.2Fbeliefs_.28by_President.29 35. http://en.wikipedia.org/wiki/List_of_U.S._Presidential_religious_affiliations#List_of_Presidential_religious_affiliations_.28by_religion.29 36. http://en.wikipedia.org/wiki/List_of_U.S._Presidential_religious_affiliations#External_links 37. http://en.wikipedia.org/wiki/List_of_U.S._Presidential_religious_affiliations#Further_reading 38. http://en.wikipedia.org/wiki/List_of_U.S._Presidential_religious_affiliations#Presidential_trivia_lists 39. http://en.wikipedia.org/w/index.php?title=List_of_U.S._Presidential_religious_affiliations&action=edit§ion=1 40. http://en.wikipedia.org/wiki/George_Washington 41. http://en.wikipedia.org/wiki/Deism 42. http://en.wikipedia.org/wiki/Episcopal_Church_in_the_United_States_of_America 43. http://en.wikipedia.org/wiki/Deism 44. http://en.wikipedia.org/wiki/Divine_providence 45. http://en.wikipedia.org/wiki/Miracle 46. http://en.wikipedia.org/wiki/Episcopal_Church_in_the_United_States_of_America 47. http://en.wikipedia.org/wiki/State_religion 48. http://en.wikipedia.org/wiki/Virginia 49. http://en.wikipedia.org/wiki/Christianity 50. http://en.wikipedia.org/wiki/Eucharist 51. http://en.wikipedia.org/wiki/Freemasonry 52. http://en.wikipedia.org/wiki/John_Adams 53. http://en.wikipedia.org/wiki/Unitarianism 54. http://en.wikipedia.org/wiki/Congregational_church 55. http://en.wikipedia.org/wiki/New_England 56. http://en.wikipedia.org/wiki/Reformed_churches 57. http://en.wikipedia.org/wiki/Presbyterianism 58. http://en.wikipedia.org/wiki/First_Great_Awakening 59. http://en.wikipedia.org/wiki/Arminianism 60. 
http://en.wikipedia.org/wiki/Deism 61. http://en.wikipedia.org/wiki/Unitarianism 62. http://en.wikipedia.org/wiki/Transcendentalism 63. http://en.wikipedia.org/wiki/Calvinism 64. http://en.wikipedia.org/wiki/Predestination 65. http://en.wikipedia.org/wiki/1750s 66. http://en.wikipedia.org/wiki/Universalism 67. http://en.wikipedia.org/wiki/Boston 68. http://en.wikipedia.org/wiki/Nontrinitarianism 69. http://en.wikipedia.org/wiki/Harvard_University 70. http://www.uua.org/uuhs/duub/articles/johnadams.html 71. http://en.wikipedia.org/wiki/Thomas_Jefferson 72. http://en.wikipedia.org/wiki/Deism 73. http://en.wikipedia.org/wiki/Episcopal 74. http://en.wikipedia.org/wiki/Deism 75. http://en.wikipedia.org/wiki/Unitarian_Universalism 76. http://www.uua.org/uuhs/duub/articles/thomasjefferson.html 77. http://www.famousuus.com/ 78. http://www.famousuus.com/bios/thomas_jefferson.htm 79. http://en.wikipedia.org/wiki/Unitarianism 80. http://en.wikipedia.org/wiki/Joseph_Priestley 81. http://en.wikipedia.org/wiki/Episcopal 82. http://en.wikipedia.org/wiki/Trinitarianism 83. http://en.wikipedia.org/wiki/Jefferson_Bible 84. http://en.wikiquote.org/wiki/Thomas_Jefferson 85. http://www.positiveatheism.org/hist/quotes/jefferson.htm 86. http://en.wikipedia.org/wiki/James_Madison 87. http://en.wikipedia.org/wiki/Deism 88. http://en.wikipedia.org/wiki/Episcopal 89. http://en.wikipedia.org/wiki/Virginia_General_Assembly 90. http://en.wikipedia.org/wiki/Church_of_England 91. http://en.wikipedia.org/wiki/Patrick_Henry 92. http://en.wikipedia.org/wiki/James_Monroe 93. http://en.wikipedia.org/wiki/Deism 94. http://en.wikipedia.org/wiki/Episcopal 95. http://en.wikipedia.org/wiki/John_Quincy_Adams 96. http://en.wikipedia.org/wiki/Unitarianism 97. http://www.uua.org/uuhs/duub/articles/johnquincyadams.html 98. http://en.wikipedia.org/wiki/Andrew_Jackson 99. http://en.wikipedia.org/wiki/Presbyterianism 100. http://en.wikipedia.org/wiki/Martin_Van_Buren 101. 
http://en.wikipedia.org/wiki/Reformed_Church_in_America 102. http://en.wikipedia.org/wiki/Kinderhook_%28village%29%2C_New_York 103. http://en.wikipedia.org/wiki/Hudson%2C_New_York 104. http://en.wikipedia.org/wiki/William_Henry_Harrison 105. http://en.wikipedia.org/wiki/Episcopal 106. http://en.wikipedia.org/wiki/John_Tyler 107. http://en.wikipedia.org/wiki/Deism 108. http://en.wikipedia.org/wiki/Episcopal 109. http://en.wikipedia.org/wiki/James_K._Polk 110. http://en.wikipedia.org/wiki/Presbyterianism 111. http://en.wikipedia.org/wiki/Methodism 112. http://en.wikipedia.org/wiki/Zachary_Taylor 113. http://en.wikipedia.org/wiki/Episcopal 114. http://en.wikipedia.org/wiki/Millard_Fillmore 115. http://en.wikipedia.org/wiki/Unitarianism 116. http://en.wikipedia.org/wiki/Franklin_Pierce 117. http://en.wikipedia.org/wiki/Episcopal 118. http://en.wikipedia.org/wiki/James_Buchanan 119. http://en.wikipedia.org/wiki/Presbyterianism 120. http://en.wikipedia.org/wiki/Abraham_Lincoln 121. http://en.wikipedia.org/wiki/Deism 122. http://www.positiveatheism.org/hist/steinlinc.htm 123. http://www.infidels.org/library/historical/john_remsburg/six_historic_americans/chapter_5.html 124. http://en.wikipedia.org/wiki/Ward_Hill_Lamon 125. http://en.wikipedia.org/wiki/William_H._Herndon 126. http://en.wikipedia.org/wiki/Apocryphal 127. http://en.wikipedia.org/w/index.php?title=Phineas_D._Gurley&action=edit 128. http://showcase.netins.net/web/creative/lincoln/speeches/gurley.htm 129. http://en.wikipedia.org/wiki/February_15 130. http://en.wikipedia.org/wiki/1928 131. http://en.wikipedia.org/wiki/Andrew_Johnson 132. http://en.wikipedia.org/wiki/Ulysses_S._Grant 133. http://en.wikipedia.org/wiki/Rutherford_B._Hayes 134. http://en.wikipedia.org/wiki/1890 135. http://en.wikipedia.org/wiki/May_17 136. http://en.wikipedia.org/wiki/James_Garfield 137. http://en.wikipedia.org/wiki/Disciples_of_Christ 138. http://en.wikipedia.org/wiki/Chester_A._Arthur 139. 
From checker at panix.com Thu Jul 21 20:58:02 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:58:02 -0400 (EDT) Subject: [Paleopsych] Discovery: Human Brain's 'Mastermind' Located Message-ID: Human Brain's 'Mastermind' Located http://dsc.discovery.com/news/briefs/20050718/multitasking_print.html By Jennifer Viegas, Discovery News July 20, 2005 -- Humans attempt to do many things at the same time, such as driving and chatting on the phone, or working and listening to music, and now research suggests why such multitasking may be possible: the brain appears to have its own control center. Studies indicate that the physical "center" of the brain is located in the prefrontal cortex, on the left-hand front side, just above the temple. This is the first time that a "mastermind," which could control both visual and auditory activity, has been identified. Before the new research, most scientists thought the brain processed sight and sound in different areas. Now it is believed that sight and sound influence each other. 
"Many others have studied how matched audio-visual events, such as watching lips move and hearing speech sounds, are processed in the brain, but we wanted to draw attention to all of the audio-visual events humans are exposed to that are completely unrelated, like driving and talking on a cell phone or cleaning your apartment and listening to music," said Jennifer Johnson, lead author of the study and a researcher in the experimental psychology program at McGill University. Johnson and colleague Robert Zatorre had test subjects listen to short, novel melodies and look at changing geometric shapes on a computer screen, both separately and at the same time. When multitasking, participants were asked to focus more on the music or the shapes at various periods. Magnetic Resonance Imaging (MRI) recorded what happened in their brains. The study's results were presented at the annual meeting of the Organization for Human Brain Mapping in Toronto. When someone only listened to music, the auditory part of the brain, located just over the ears, activated. Visual stimulation by itself activated the visual area of the brain, toward the back of the head. Multitasking brought in the "mastermind" area that seemed to divide and control activity between the visual and auditory parts of the brain. The researchers, however, said one activity usually takes precedence, which could explain why students who forever listen to loud music do not study as well as those who work in silence, and why drivers who chat on the phone often make errors in judgment. "One of the events is distracting from the other," Johnson told Discovery News. "We see in our study how focusing attention on one sense causes increased activity in that sensory area of the brain, but we also see how ignoring the other sense causes decreased activity in the other sensory area of the brain. 
This is likely why listening to raucous music or talking on the cell phone can lead to decrements in performance for the other tasks of studying and driving." Another recent study on brain organization, authored by Michael Fox and colleagues at Washington University, supports the importance of the frontal cortex region in multitasking. Fox and his team also noted that whenever an activity demands more attention, such as when a driver talking on a cell phone realizes he or she is about to run a red light, the driver's brain will then focus more on the driving than on the talking. From checker at panix.com Thu Jul 21 20:58:12 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:58:12 -0400 (EDT) Subject: [Paleopsych] NYT: Brain-Dead Woman's Fetus Passes Milestone in Development Message-ID: Brain-Dead Woman's Fetus Passes Milestone in Development http://www.nytimes.com/2005/07/21/health/21fetus.html [What medical knowledge will be gained from this? Is that knowledge worth the price?] By THE ASSOCIATED PRESS RICHMOND, Va., July 20 (AP) - A brain-dead pregnant woman on life support has passed the milestone in her pregnancy where doctors believe the baby could realistically survive outside the womb, giving her family renewed hope. The woman, Susan Torres, 26, lost consciousness from a stroke on May 7 after aggressive melanoma spread to her brain. Her husband, Jason Torres, said doctors told him his wife's brain functions had stopped. Her fetus recently passed the 24th week of development, the earliest point at which doctors believe the baby would have a reasonable chance to survive, said her brother-in-law, Justin Torres. "The situation is pretty stable," said the brother-in-law, who is serving as the family's spokesman. "Susan, we have said from the beginning, is the toughest person in that I.C.U. room." He said that a sonogram suggested that the baby is a girl, and that Cecilia was one possible name the couple had discussed before the stroke. 
A Web site set up by the family - [3]www.susantorresfund.org - has helped raise about $400,000 in donations to pay the mounting medical bills, Justin Torres said. Jason Torres quit his job as a printing salesman to be by his wife's side, and the family must pay tens of thousands of dollars each week that insurance does not cover, the family says. Donations have poured in from around the world: Germany, Britain, Ireland, Japan, even a check with no note from a soldier in Iraq. On Monday, the family received a hand-knit baby blanket from a woman in Pennsylvania who was on a tight income but wanted to help. Jason Torres spends every night sleeping in a reclining chair next to his wife's bed at Virginia Hospital Center in Arlington, about 100 miles north of Richmond. The hospital has declined to comment on the case. The couple's 2-year-old son, Peter, is staying with grandparents. He has not seen his mother, a researcher at the National Institutes of Health, since her collapse. If possible, the doctors hope to hold off on delivering the child until 32 weeks' gestation, Justin Torres said. A full-term pregnancy is about 40 weeks. "She would have wanted us to fight for this baby - there's no doubt in our minds," he said. Ms. Torres's melanoma has spread to lymph nodes and taken over her vital organs, but they continue to function. There is a chance the [4]cancer could spread to the placenta, but so far it has been spared, the brother-in-law said. Extra precautions, including limiting the number of visitors, have recently been taken to help her avoid infections. Doctors have delayed giving the family a prognosis because the situation is so rare, said Mr. Torres, who said he believed his sister-in-law was likely to hang on for a few more weeks. Since 1979, there have been at least a dozen similar cases published in English medical literature, said Dr. 
Winston Campbell, director of maternal-fetal medicine at the University of Connecticut Health Center, which conducted research on the topic. The family received an unexpected sliver of joy on June 21, when Jason Torres felt his baby kick for the first time. "It was a very, very nice reminder of what this is all about, and very heartening to us to know that we're making progress and that we're getting closer and closer," Justin Torres said. "That was a very good day for everyone." References 3. http://www.susantorresfund.org/ 4. http://topics.nytimes.com/top/news/health/diseasesconditionsandhealthtopics/cancer/index.html?inline=nyt-classifier From checker at panix.com Thu Jul 21 20:58:21 2005 From: checker at panix.com (Premise Checker) Date: Thu, 21 Jul 2005 16:58:21 -0400 (EDT) Subject: [Paleopsych] NYT: 3 Biologists Question Evidence in Sighting of Rare Woodpecker Message-ID: 3 Biologists Question Evidence in Sighting of Rare Woodpecker http://www.nytimes.com/2005/07/21/science/21bird.html By [3]ANDREW C. REVKIN Three biologists are questioning the evidence used by a team of bird experts who made the electrifying claim in April that they had sighted an ivory-billed woodpecker, a bird presumed to have vanished from the United States more than 60 years ago, in the swampy forests of southeast Arkansas. If the challenge holds up, it would undermine not only a scientific triumph - the rediscovery of a resplendent bird that had been exhaustively sought for years - but also significant new conservation expenditures in the region. The paper questioning the discovery has been submitted to a peer-reviewed journal, which could post the analysis online within a few weeks. But the paper will be accompanied by a fierce rebuttal by the team that announced the discovery, and a response to that rebuttal by the challengers. The expected publication of the paper and the rebuttal was confirmed in interviews and e-mail exchanges with two authors of the challenge, Richard O. 
Prum and Mark B. Robbins, ornithologists at Yale and the University of Kansas, as well as with two members of the team that reported finding the woodpecker. The third author of the new paper is Jerome A. Jackson, a zoologist at Florida Gulf Coast University and the author of the book, "In Search of the Ivory-Billed Woodpecker," published in 2004. "In my opinion," Mr. Jackson wrote in an e-mail message on Wednesday, "the data presented thus far do no more than suggest the possibility of the presence of an ivory-billed woodpecker. I am most certainly not saying that ivory-billed woodpeckers are not out there. I truly hope that the birds do exist in Arkansas or elsewhere and have been championing this idea for a long time." Both groups of scientists declined to name the journal or to discuss the details of the challenge and the response until they were published. But they made it clear that the debate revolves around [4]four seconds of fuzzy videotape that, by chance, captured a bird with sweeping white-and-black wings as it darted from its perch on the far side of a tupelo tree in April 2004 and flicked over swampy waters before vanishing in the trees 11 wing beats later. That video clip was just one piece in a pile of drawings, recordings and other evidence collected in more than a year of searching and deploying cameras and listening devices across the vast swampy reaches of the Cache River National Wildlife Refuge. Altogether, the original research team, led by scientists from Cornell University and the Nature Conservancy, compiled seven sightings, including the video, as well as recordings of a "double knock" sound typical of the ivory-billed bird. But only the video was potentially solid enough to confirm for the wider ornithological community the existence of the bird, the authors said in various statements at the time. 
Everyone agrees that the bird that appears on the tape is either an ivory-billed woodpecker or a pileated woodpecker, a slightly smaller bird that is relatively common. Both species have a mix of white and black plumage. However, the ivory-billed woodpecker has a white trailing edge to its wings while the pileated woodpecker has a black trailing edge. The team that conducted the original search for the bird ran extensive tests, including recreating the scene captured in video using flapping, hand-held models of the two types of woodpecker. They concluded that the plumage patterns seen in the grainy image could only be that of the ivory-billed woodpecker. The authors of the new paper disagree. Only extended scientific discussion - or new pictures of the bird from additional searches - will determine whose view will prevail. Another intensive scientific search of the region is scheduled to begin in November, Cornell officials said. "The people who originally announced this thoroughly believe they got an ivory-billed woodpecker," Dr. Robbins said in an interview. Determining if a species has crossed the threshold of extinction often requires decades of observation to ensure that no stray individuals have found a reclusive hideaway. Supposedly extinct species have been rediscovered with some frequency over the last century. One famed example is the coelacanth, a fish known only from fossils for generations but then caught by African anglers. In the case of the ivory-billed woodpecker, a magnificent bird with a 30-inch wingspan and a red crest, determining that it has not become extinct has proved equally daunting. Individual birds were widely dispersed, and the woodpecker shared habits and habitat with the pileated woodpecker. Van Remsen of Louisiana State University, an expert on the woodpecker and a member of the team that reported finding the ivory-billed species, said he remained confident of the discovery. "We can counter everything," he said. 
"We stick to our guns." The announcement of the bird's apparent discovery came on April 28, when the scientists' findings were published in the online version of the journal Science. The announcement thrilled conservationists, who saw the bird as the perfect symbol around which to build an invigorated protection plan for woodland habitat in the Southeast, which harbors a rich array of wildlife and plants. The Bush administration used the reported sightings in Arkansas to promote its "cooperative conservation" philosophy. The day the rediscovery was publicized, the administration announced a variety of initiatives, including a plan to pay more than $13 million to landowners within the region's floodplains who plant and maintain forests. John W. Fitzpatrick, the co-leader of the search for the bird and director of the Cornell University Laboratory of Ornithology, said it was normal for scientists to disagree about evidence of this sort, especially because in this case the video in question was "pretty crummy." But he said that extensive analysis was done and redone to eliminate the possibility that the bird was a pileated woodpecker. Dr. Fitzpatrick added that there was "significant additional evidence right now" that would be published in coming months. He declined to comment on the challengers' assertions, saying any discussion could jeopardize publication of the exchange of papers on the video. References 3. http://query.nytimes.com/search/query?ppds=bylL&v1=ANDREW%20C.%20REVKIN&fdq=19960101&td=sysdate&sort=newest&ac=ANDREW%20C.%20REVKIN&inline=nyt-per 4. http://www.ivorybill.org/video.html From ljohnson at solution-consulting.com Fri Jul 22 01:51:23 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) 
Date: Thu, 21 Jul 2005 19:51:23 -0600 Subject: [Paleopsych] NYT: Brain-Dead Woman's Fetus Passes Milestone in Development In-Reply-To: References: Message-ID: <42E0511B.3010500@solution-consulting.com> Frank, you ask touching questions "[What medical knowledge will be gained from this? Is that knowledge worth the price?]". Thank you for raising them. Embedded is the humanistic assumption that life has to be intelligent and aware in order to be worth living. Humanists seem to have a horror of living in a coma-like state. The religious apologist would reply that it is not proper for us to judge the value of a life, if we do not wish to be judged. It doesn't matter, says the apologist, whether important knowledge is gained; a healthy human baby is of infinite value. The worth is there. One might also pause to reflect on the slippery slope inherent in judging which lives are worth living. Eugenics and "mercy" killing (really, killing those who are inconvenient for us) lurk. My cousin was born with Down Syndrome 50 years ago and my aunt and uncle were told to put him away and forget about him. His life, the doctors said, wasn't worth living. Instead they brought him home and nurtured him. When they tried to enroll him in school, he was refused, so my uncle ran for the local school board, became president, and began one of the nation's first programs to mainstream retarded children. Today my cousin is married to a girl with Down syndrome; he works at a job he enjoys, and the two of them have a happy life. He knows more about the NFL than I do, and can play the piano (not well, mind you). Because of him, many retarded children in my little home town were educated to the best of their capacity, long before your agency became involved. So one never knows where the ripples of a life well nurtured will end. 
Lynn Premise Checker wrote: > Brain-Dead Woman's Fetus Passes Milestone in Development > http://www.nytimes.com/2005/07/21/health/21fetus.html > > [What medical knowledge will be gained from this? Is that knowledge > worth the price?] > > By THE ASSOCIATED PRESS > > RICHMOND, Va., July 20 (AP) - A brain-dead pregnant woman on life > support has passed the milestone in her pregnancy where doctors > believe the baby could realistically survive outside the womb, giving > her family renewed hope. > > -snip- From HowlBloom at aol.com Fri Jul 22 05:53:29 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Fri, 22 Jul 2005 01:53:29 EDT Subject: [Paleopsych] Eshel, Pavel, and Paul--a question Message-ID: <149.497163a6.3011e3d9@aol.com> Hi. Do you remember my big bagel theory of the cosmos, the one I came up with in 1959 when I was working at a cancer research lab? Back in 1997, it led me to predict a form of negative gravity. The following year the acceleration of the cosmos was discovered and was explained by negative gravity--dark energy. Then the Big Bagel theory allowed me to explain dark energy--negative gravity--in a unique way--as the gravitational attraction between a standard-matter universe on the bagel's topside and an anti-matter universe on the bagel's underside. Now there's more that seems to support the big bagel theory--that the universe seems directional, not randomly scattered, that the cosmos needs far more matter than it's got to explain its behavior (the big bagel theory says that two universes are on opposite sides of the same bagel--so there's twice as much stuff as we can see), etc. Does the information in the article below seem to support the big bagel? And how does Modified Newtonian Dynamics fit into this picture? 
Howard ___________ Retrieved July 22, 2005, from the World Wide Web http://www.newscientist.com/article.ns?id=mg18625061.800 Marcus Chown is the author of The Universe Next Door published by Headline (2003) [Figures: Evolution of the big bang; Big bang Universe; Cracks in the big bang] WHAT if the big bang never happened? Ask cosmologists this and they'll usually tell you it is a stupid question. The evidence, after all, is written in the heavens. Take the way galaxies are scattered across the sky, or witness the fading afterglow of the big bang fireball. Even the way the atoms in your body have come into being over the eons. They are all smoking guns that point to the existence 13.7 billion years ago of an ultra-hot, ultra-dense state known as the big bang. Or are they? A small band of researchers is starting to ask the question no one is supposed to ask. Last week the dissidents met to review the evidence at the first ever Crisis in Cosmology conference in Monção, Portugal. There they argued that cosmologists' most cherished theory of the universe fails to explain certain crucial observations. If they are right, the universe could be a lot weirder than anyone imagined. But before venturing that idea, say the dissidents, it is time for some serious investigation into the big bang's validity and its alternatives. "Look at the facts," says Riccardo Scarpa of the European Southern Observatory in Santiago, Chile. "The basic big bang model fails to predict what we observe in the universe in three major ways." The temperature of today's universe, the expansion of the cosmos, and even the presence of galaxies, have all had cosmologists scrambling for fixes. "Every time the basic big bang model has failed to predict what we see, the solution has been to bolt on something new - inflation, dark matter and dark energy," Scarpa says. For Scarpa and his fellow dissidents, the tinkering has reached an unacceptable level. 
All for the sake of saving the notion that the universe flickered into being as a hot, dense state. "This isn't science," says Eric Lerner, who is president of Lawrenceville Plasma Physics in West Orange, New Jersey, and one of the conference organisers. "Big bang predictions are consistently wrong and are being fixed after the event." So much so that today's "standard model" of cosmology has become an ugly mishmash comprising the basic big bang theory, inflation and a generous helping of dark matter and dark energy. The fact that the conference went ahead at all is an important step forward, say its organisers. Last year they wrote an open letter warning that failure to fund research into big bang alternatives was suppressing free debate in the field of cosmology (New Scientist, 22 May 2004, p 20). The trouble, says Lerner, who headed the list of more than 30 signatories, is that cosmology is bankrolled by just a few sources, and the committees that control those purse strings are dominated by supporters of the big bang. Critics of the standard model of cosmology are not just uncomfortable about the way they feel it has been cobbled together. They also point to specific observations that they believe cast doubt on cosmology's standard model. Take the most distant galaxies ever spotted, for example. According to the accepted view, when we observe ultra-distant galaxies we should see them in their youth, full of stars not long spawned from gas clouds. This is because light from these faraway galaxies has taken billions of years to reach us, and so the galaxies must appear as they were shortly after the big bang. But there is a problem. "We don't see young galaxies," says Lerner. "We see old ones." He cites recent observations of high-red-shift galaxies from NASA's Spitzer space telescope. A galaxy's red shift is a measure of how much the universe has expanded since it emitted its light. 
As the light travels through an expanding universe, its wavelength gets stretched, as if the light wave were drawn on a piece of elastic. The increase in wavelength corresponds to a shift towards the red end of the spectrum. The Spitzer galaxies have red shifts that correspond to a time when the universe was between about 600 million and 1 billion years old. Galaxies this young should be full of newborn stars that emit blue light because they are so hot. The galaxies should not contain many older stars that are cool and red. "But they do," says Lerner. Spitzer is the first telescope able to detect red stars in faraway galaxies because it is sensitive to infrared light. This means it can detect red light from stars in high-red-shift galaxies that has been pushed deep into the infrared during its journey to Earth. "It turns out these galaxies aren't young at all," says Lerner. "They have pretty much the same range of stars as present-day galaxies." And that is bad news for the big bang. Among the stars in today's galaxies are red giants that have taken billions of years to burn all their hydrogen and reach this bloated phase. So the Spitzer observations suggest that some of the stars in ultra-distant galaxies are older than the galaxies themselves, which plunges the standard model of cosmology into crisis. Fog-filled universe Not surprisingly, cosmologists have panned Lerner's theories. They put the discrepancy down to large uncertainties in estimating the ages of galaxies. But Lerner has a reply. He points to other distant objects that appear much older than they ought to be. "At high red shift, we also observe clusters and huge superclusters of galaxies," he says, arguing that it would have taken far longer than a billion years for galaxies to clump together to form such giant structures. His solution to the puzzle is extreme. Rather than being caused by the expanding universe, he believes that the red shift is down to some other mechanism. 
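The redshift arithmetic the article leans on can be sketched in a few lines of Python. The wavelengths below are illustrative round numbers, not figures from the article: redshift z is the fractional stretch in wavelength, and, in the standard interpretation Lerner disputes, 1 + z is the factor by which the universe expanded while the light was in transit.

```python
def redshift(lambda_emitted_nm, lambda_observed_nm):
    """Fractional wavelength stretch: z = (observed - emitted) / emitted."""
    return (lambda_observed_nm - lambda_emitted_nm) / lambda_emitted_nm

def expansion_factor(z):
    """In standard cosmology, lengths have grown by a factor 1 + z since emission."""
    return 1.0 + z

# Hydrogen-alpha light emitted at 656 nm and observed at 3280 nm:
z = redshift(656.0, 3280.0)
print(z)                    # 4.0
print(expansion_factor(z))  # 5.0 -- the universe is five times larger than at emission
```

An "intrinsic" redshift mechanism of the kind Lerner imagines would produce the same measured z without any expansion, which is exactly why he wants a laboratory test to discriminate between the two.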
But at this stage it is only a guess. "I admit I don't know what that mechanism might be," Lerner says, "though I believe it is intrinsic to light." To test his idea, he would like to see sensitive experiments on Earth capable of detecting minute changes in light. One possibility would be to modify the LIGO detector in Hanford, Washington state. LIGO is designed to detect gravitational waves, the warps in space-time created by events such as neutron star collisions. To do this it bounces perpendicular beams of laser light hundreds of times between mirrors 4 kilometres apart, looking for subtle shifts in the beams' lengths. With a few tweaks, Lerner believes that LIGO could be modified to measure any intrinsic red-shifting that light might undergo. If the experiment ever gets the go-ahead and Lerner is proved right, the implications would be immense, not least because the tapestry of cosmology as we know it would unravel. Without an expanding universe, there would be no need to invoke dark energy to account for the apparent acceleration of that expansion. Nor would there be any reason to suppose the big bang was the ultimate beginning. "I can prove that the universe wasn't born 13.7 billion years ago," says Lerner. "The big bang never happened." However, Lerner's claims leave plenty of awkward questions. Among them is the matter of the cosmic microwave background. First detected in 1965, the vast majority of cosmologists believe that this faint, all-pervading soup of microwaves is the dying glow of the big bang, and proof of the ultimate beginning. According to big bang theory, the hot radiation that filled space after the birth of the universe has been trapped inside ever since because it has nowhere else to go. As the universe expanded over the past 13.7 billion years, the radiation has cooled to today's temperature of less than 3 kelvin above absolute zero. So if there was no big bang, where did the cosmic microwave background come from? 
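The cooling the big bang account requires is simple to sketch: in standard cosmology, blackbody radiation in an expanding universe cools in proportion to 1/(1 + z). The numbers below (emission at roughly 3000 K, redshifted by z of about 1100) are the usual textbook values, assumed here purely for illustration:

```python
def observed_temperature(T_emitted_K, z):
    """Blackbody temperature after cosmological redshifting: T_obs = T_emitted / (1 + z)."""
    return T_emitted_K / (1.0 + z)

# Radiation released at ~3000 K when the early universe became transparent,
# stretched by z ~ 1100, arrives today at a few kelvin:
print(round(observed_temperature(3000.0, 1100.0), 2))  # 2.72
```

That this falls so close to the measured ~3 K glow is the mainstream answer to the question above; Lerner's fog proposal has to reproduce the same temperature by other means.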
Lerner believes that cosmologists have got the origin of the microwave glow all wrong. "If you wake up in a tent and everything around you is white, you don't conclude you've seen the start of the universe," he says. "You conclude you're in fog." Rather than coming from the big bang, Lerner believes that the cosmic background radiation is really starlight that has been absorbed and re-radiated. It is an old idea that was widely promoted by the late cosmologist and well-known big bang sceptic Fred Hoyle. He believed that starlight was absorbed by needle-like grains of iron ejected by supernovae and then radiated as microwaves. But Hoyle never found any evidence to back up his ideas and many cosmologists dismissed them. Lerner's idea is similar, though he thinks that threads of electrically charged gas called plasma are responsible, rather than iron whiskers. Jets of plasma are squirted into intergalactic space by highly energetic galaxies known as quasars, and Lerner believes that such plasma filaments continually fragmented until they filled the universe like fog. This fog then scattered the infrared light radiated by dust that had in turn absorbed starlight. By doing so, Lerner believes, the infrared radiation became uniform in all directions, just as the cosmic microwave background appears to be. All this is possible, he argues, because standard cosmology theory has overlooked processes involving plasmas. "All astronomers know that 99.99 per cent of matter in the universe is in the form of plasma, which is controlled by electromagnetic forces," he says. "Yet all astronomers insist on believing that gravity is the only important force in the universe. It is like oceanographers ignoring hydrodynamics." To make progress, Lerner is calling for theories that include plasma phenomena as well as gravity, and for more rigorous testing of theory against observations.
Of course, Lerner's ideas are extremely controversial and few people are convinced, but that doesn't stop other researchers questioning the standard theory too. They have their own ideas about what is wrong with it. In Scarpa's case, the mysterious dark matter is at fault. Dark matter has become an essential ingredient in cosmology's standard model. That's because the big bang on its own fails to describe how galaxies could have congealed from the matter forged shortly after the birth of the universe. The problem is that gas and dust made from normal matter were spread too evenly for galaxies to clump together in just 13.7 billion years. Cosmologists fix this problem by adding to their brew a vast amount of invisible dark matter which provides the extra tug needed to speed up galaxy formation. The same gravitational top-up helps to explain the rapid motion of outlying stars in galaxies. Astronomers have measured stars orbiting their galactic centres so fast that they ought to fly off into intergalactic space. But dark matter's extra gravity would explain how the galaxies hold onto their speeding stars. Similarly, dark matter is needed to explain how clusters of galaxies can hold on to galaxies that are orbiting the cluster's centre so fast they ought to be flung away. But dark matter may not be the cure-all it seems, warns Scarpa. What worries him are inconsistencies with the theory. "If you believe in dark matter, you discover there is too much of it," he says. In particular, his observations point to dark matter in places cosmologists say it shouldn't exist. One place no one expects to see it is in globular clusters, tight knots of stars that orbit the Milky Way and many other galaxies. Unlike normal matter, the dark stuff is completely incapable of emitting light or any other form of electromagnetic radiation. 
This means a cloud of the stuff cannot radiate away its internal heat, a process vital for gravitational contraction, so dark matter cannot easily clump together at scales as small as those of globular clusters. Scarpa's observations tell a different story, however. He and his colleagues have found evidence that the stars in globular clusters are moving faster than the gravity of visible matter can explain, just as they do in larger galaxies. They have studied three globular clusters, including the Milky Way's biggest, Omega Centauri, which contains about a million stars. In all three, they find the same wayward behaviour. So if it isn't dark matter, what is going on? Scarpa's team believes the answer might be a breakdown of Newton's law of gravity, which says an object's gravitational tug is inversely proportional to the square of your distance from it. Their observations of globular clusters suggest that Newton's inverse square law holds true only above some critical acceleration. Below this threshold strength, gravity appears to dissipate more slowly than Newton predicts. Exactly the same effect has been spotted in spiral galaxies and galaxy-rich clusters. It was identified more than 20 years ago by Mordehai Milgrom at the Weizmann Institute in Rehovot, Israel, who proposed a theory known as modified Newtonian dynamics (MOND) to explain it. Scarpa points out that the critical acceleration of 10^-10 metres per second per second that was identified for galaxies appears to hold for globular clusters too. And his work has led him to the same conclusion as Milgrom: "There is no need for dark matter in the universe," says Scarpa. It is a bold claim to make. And not surprisingly, MOND has had plenty of critics over the years. One of cosmologists' biggest gripes is that MOND is not compatible with Einstein's theory of relativity, so it is not valid for objects travelling close to the speed of light or in very strong gravitational fields.
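The behaviour described above, Newtonian gravity above the critical acceleration and a slower fall-off below it, can be sketched numerically. This toy version of MOND uses the commonly quoted "simple" interpolating function; the galaxy mass and radii are made-up illustrative values, and real MOND fits to rotation curves are considerably more involved:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
A0 = 1e-10      # MOND critical acceleration, m/s^2 (the value quoted in the text)

def newtonian_g(mass_kg: float, r_m: float) -> float:
    """Newton's inverse-square acceleration g_N = G * M / r^2."""
    return G * mass_kg / r_m**2

def mond_g(mass_kg: float, r_m: float) -> float:
    """MOND acceleration with the 'simple' interpolating function mu(x) = x/(1+x).
    Solving g * mu(g/A0) = g_N gives g = (g_N + sqrt(g_N^2 + 4*g_N*A0)) / 2:
    close to Newton when g_N >> A0, close to sqrt(g_N * A0) when g_N << A0."""
    gn = newtonian_g(mass_kg, r_m)
    return 0.5 * (gn + math.sqrt(gn * gn + 4.0 * gn * A0))

# In the weak-field regime the circular speed v = sqrt(g * r) stops falling
# with radius -- a flat rotation curve with no dark matter required.
M = 2e41  # a galaxy-scale visible mass in kg (illustrative)
for r in (1e20, 4e20, 1.6e21):
    v_kms = math.sqrt(mond_g(M, r) * r) / 1000.0
    print(f"r = {r:.1e} m: v = {v_kms:.0f} km/s")
```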
In practice, this means MOND has been powerless to make predictions about pulsars, black holes and, most importantly, the big bang. But this has now been fixed by Jacob Bekenstein at the Hebrew University of Jerusalem in Israel. Bekenstein's relativistic version of the theory already appears to be bearing fruit. In May a team led by Constantinos Skordis of the University of Oxford showed that relativistic MOND can make cosmological predictions. The researchers have reproduced both the observed properties of the cosmic microwave background and the distribution of galaxies throughout the universe (www.arxiv.org/abs/astro-ph/0505519).

Gravity in crisis

Scarpa believes that MOND is a crucial body blow for the big bang. "It means that the law of gravity from which we derive the big bang is wrong," he says. He insists that cosmologists are interpreting astronomical observations using the wrong framework. And he urges them to go back to the drawing board and derive a cosmological model based on MOND. For now, his plea seems to be falling mostly on deaf ears. Yet there is more evidence that there could be something wrong with the standard model of cosmology. And it is evidence that many cosmologists are finding harder to dismiss because it comes from the jewel in the crown of cosmology instruments, the Wilkinson Microwave Anisotropy Probe. "It could be telling us something fundamental about our universe, maybe even that the simplest big bang model is wrong," says João Magueijo of Imperial College London. Since its launch in 2001, WMAP has been quietly taking the temperature of the universe from its vantage point 1.5 million kilometres out in space. The probe measures the way the temperature of the cosmic microwave background varies across the sky. Cosmologists believe that the tiny variations from one place to another are an imprint of the state of the universe about 300,000 years after the big bang, when matter began to clump together under gravity.
Hotter patches correspond to denser regions, and cooler patches reflect less dense areas. These density variations began life as quantum fluctuations in the vacuum in the first split second of the universe's existence, and were subsequently amplified by a brief period of phenomenally fast expansion called inflation. Because the quantum fluctuations popped up at random, the hot and cold spots we see in one part of the sky should look much like those in any other part. And because the cosmic background radiation is a feature of the universe as a whole rather than any single object in it, none of the hot or cold regions should be aligned with structures in our corner of the cosmos. Yet this is exactly what some researchers are claiming from the WMAP results. Earlier this year, Magueijo and his Imperial College colleague Kate Land reported that they had found a bizarre alignment in the cosmic microwave background. At first glance, the pattern of hot and cold spots appeared random, as expected. But when they looked more closely, they found something unexpected. It is as if you were listening to an anarchic orchestra playing some random cacophony, and yet when you picked out the violins, trombones and clarinets separately, you discovered that they are playing the same tune. Like an orchestral movement, the WMAP results can be analysed as a blend of patterns of different spatial frequencies. When Magueijo and Land looked at the hot and cold spots this way, they noticed a striking similarity between the individual patterns. Rather than being spattered randomly across the sky, the spots in each pattern seemed to line up along the same direction. With a good eye for a newspaper headline, Magueijo dubbed this alignment the axis of evil. "If it is true, this is an astonishing discovery," he says. 
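The alignment Magueijo and Land report can be quantified in a simple way: extract a preferred axis from each spatial-frequency pattern and compare the axes pairwise. A toy sketch of that comparison (the axis vectors here are invented for illustration; the real analysis works on WMAP multipole maps):

```python
import math

def axis_alignment(a, b):
    """|cos(angle)| between two axes: 1.0 means parallel, 0.0 perpendicular.
    The absolute value is taken because an axis has no preferred sign."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return abs(dot) / norm

# Hypothetical preferred axes for two low-order patterns (made-up numbers):
quadrupole_axis = (0.10, 0.20, 0.97)
octopole_axis = (0.15, 0.18, 0.96)
print(axis_alignment(quadrupole_axis, octopole_axis))  # close to 1: nearly aligned
```

For randomly oriented patterns these pairwise values should scatter widely; a cluster of values near 1.0 across many patterns is what makes the "axis of evil" look like a preferred direction rather than chance.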
That's because the result flies in the face of big bang theory, which rules out any such special or preferred direction. So could the weird effect be down to something more mundane, such as a problem with the WMAP satellite? Charles Bennett, who leads the WMAP mission at NASA's Goddard Space Flight Center in Greenbelt, Maryland, discounts that possibility. "I have no reason to think that any anomaly is an artefact of the instrument," he says. Another suggestion is that heat given off by the Milky Way's dusty disk has not been properly subtracted from the WMAP signals and mimics the axis of evil. "Certainly there are some sloppy papers where insufficient attention has been paid to the signals from the Milky Way," warns Bennett. Others point out that the conclusions are based on only one year's worth of WMAP signals. And researchers are eagerly awaiting the next batch, rumoured to be released in September. Yet Magueijo and Land are convinced that the alignment in the patterns does exist. "The big question is: what could have caused it?" asks Magueijo. One possibility, he says, is that the universe is shaped like a slab, with space extending to infinity in two dimensions but spanning only about 20 billion light years in the third dimension. Or the universe might be shaped like a bagel. Another way to create a preferred direction would be to have a rotating universe, because this singles out the axis of rotation as different from all other directions. Bennett admits he is excited by the possibility that WMAP has stumbled on something so important and fundamental about the universe. His hunch, though, is that the alignment is a fluke. "However, it's always possible the universe is trying to tell us something," he says. Clearly, such a universe would flout a fundamental assumption of all big bang models: that the universe is the same in all places and in all directions.
"People made these assumptions because, without them, it was impossible to simplify Einstein's equations enough to solve them for the universe," says Magueijo. And if those assumptions are wrong, it could be curtains for the standard model of cosmology. That may not be a bad thing, according to Magueijo. "The standard model is ugly and embarrassing," he says. "I hope it will soon come to breaking point." But whatever replaced it would of course have to predict all the things the standard model predicts. "This would be very hard indeed," concedes Magueijo. Meanwhile the axis of evil is peculiar enough that Bennett and his colleague Gary Hinshaw have obtained money from NASA to carry out a five-year exhaustive examination of the WMAP signals. That should exclude the possibilities of instrumental error and contamination once and for all. "The alignment is probably just a fluke but I really feel compelled to investigate it," he says. "Who knows what we will find." Lerner and his fellow sceptics are in little doubt: "What we may find is a universe that is very different than the increasingly bizarre one of the big bang theory."
From issue 2506 of New Scientist magazine, 02 July 2005, page 30 ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net From shovland at mindspring.com Fri Jul 22 06:17:44 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Fri, 22 Jul 2005 08:17:44 +0200 (GMT+02:00) Subject: [Paleopsych] NYT: Brain-Dead Woman's Fetus Passes Milestone in Development Message-ID: <29994842.1122013065298.JavaMail.root@wamui-bichon.atl.sa.earthlink.net> If we are so concerned about the life you discuss, why are we so unconcerned about the 25,000 plus lives that have been destroyed in Iraq since we started the war? -----Original Message----- From: "Lynn D. Johnson, Ph.D."
Sent: Jul 22, 2005 3:51 AM To: The new improved paleopsych list Subject: Re: [Paleopsych] NYT: Brain-Dead Woman's Fetus Passes Milestone in Development Frank, you ask touching questions "[What medical knowledge will be gained from this? Is that knowledge worth the price?]". Thank you for raising them. Embedded is the humanistic assumption that life has to be intelligent and aware in order to be worth living. Humanists seem to have a horror of living in a coma-like state. The religious apologist would reply that it is not proper for us to judge the value of a life, if we do not wish to be judged. It doesn't matter, says the apologist, if important knowledge is gained, and the prize of a healthy human baby is of infinite value. The worth is there. One might also ask for reflection on the slippery slope surrounding the issue of which lives are worth living. Eugenics and "mercy" killing (really, killing those who are inconvenient for us) lurk. My cousin was born with Down syndrome 50 years ago and my aunt and uncle were told to put him away and forget about him. His life, the doctors said, wasn't worth living. Instead they brought him home and nurtured him. When they tried to enroll him in school, he was refused, so my uncle ran for the local school board, became president, and began one of the nation's first programs to mainstream retarded children. Today my cousin is married to a girl with Down syndrome, he works at a job he enjoys, and the two of them have a happy life. He knows more about the NFL than I do, and can play the piano (not well, mind you). Because of him, many retarded children in my little home town were educated to the best of their capacity, long before your agency became involved. So one never knows where the ripples of a life well nurtured will end. Lynn Premise Checker wrote: > Brain-Dead Woman's Fetus Passes Milestone in Development > http://www.nytimes.com/2005/07/21/health/21fetus.html > > [What medical knowledge will be gained from this?
Is that knowledge > worth the price?] > > By THE ASSOCIATED PRESS > > RICHMOND, Va., July 20 (AP) - A brain-dead pregnant woman on life > support has passed the milestone in her pregnancy where doctors > believe the baby could realistically survive outside the womb, giving > her family renewed hope. > > -snip- _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Fri Jul 22 19:07:26 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:07:26 -0400 (EDT) Subject: [Paleopsych] Skeptical Inquirer: One Longsome Argument Message-ID: One Longsome Argument Skeptical Inquirer March/April 2005 http://www.csicop.org/si/2005-03/evolution.html By any objective measure, the evolution of species ranks among the most successful scientific theories ever. So why is the message not getting through? Dennis R. Trumble _________________________________________________________________ Charles Darwin liked to describe the origin of species as "one long argument," but his extensive treatise in support of biological evolution now seems painfully brief compared to the argument that has followed in its wake. Indeed, never in the history of science has a more prolonged and passionate debate dogged the heels of a theory so thoroughly researched and repeatedly validated. And the end is nowhere in sight. Despite all evidence to the contrary, a large portion of the world's population continues to cling to the belief that human beings are fundamentally different from all other life forms and that our origins are unique. It's a lovely sentiment to be sure, but how is it that so many people continue to be drawn to this thoroughly discredited notion?
Like most mystic mindsets, creationist beliefs are normally instilled at an early age, nurtured by well-meaning parents and sustained by religious organizations whose vested leaders are traditionally loath to amend church doctrine in the face of emergent scientific facts. Though seemingly antithetic to the inquisitive nature of our species, the rote acceptance of received wisdom has been a hallmark of human culture almost from the get-go, arising initially as a benign behavioral adaptation geared to promote the rapid transfer of communal survival skills to our young hominid forebears. It was only with the advent of modern civilization that this age-old habit finally began to outlive its usefulness and yield serious negative consequences-most notably by granting gratuitous momentum to all kinds of ill-conceived notions about how the world is "supposed" to work. Today, this surge of ideological inertia remains a surprisingly powerful force, pushing beliefs as impossibly anachronistic as geocentrism and flat-Earth cosmology past the ramparts of the enlightenment to foul the fringes of modern thought. Fortunately, unlike the veiled forces that impart momentum to particles of mass, the impulse that propels incongruous ideas from one generation to the next is fairly transparent at its base. After all, youngsters imprinted with self-flattering beliefs are understandably reluctant to amend them later in life owing to the special status and privileges they bestow. And once someone has grown accustomed to the hollow pleasures of this egocentric world view, it's easy to see how these inflated beliefs would come to be shielded from the prickly barbs of reason by a panoply of family, friends and other like-minded folks, all of whom harbor the same inscrutable notions (mystery loves company). 
Although this perpetual pattern of natal indoctrination and communal reassurance does not begin to encompass the full psychosocial breadth of this phenomenon-especially where adult converts are concerned-it does go a long way toward explaining the inordinate longevity of creationist mythology and why so many intelligent, well-educated, and otherwise rational people appear unable to step back and examine certain beliefs with a critical eye. Because creationist beliefs are both deeply rooted and profoundly comforting, it isn't hard to understand why certain people feel compelled to enlist any and all means at their disposal to discredit Darwin's theory. Nor is it difficult to imagine the sense of frustration they must feel when repeatedly told by scientists that their arguments are fundamentally flawed. Problem is, most folks-including many of the more learned among us-don't understand the basic workings of science well enough to appreciate how feeble the arguments against evolution really are. If they did, they would realize that the scientific process is not about gathering data to prove a favored hypothesis but instead involves the testing of ideas against the totality of real-world observations. Creationists turned amateur scientists almost always fail to grasp this essential scientific precept and so unwittingly launch from false premises all kinds of pseudoscientific arguments in support of special creation. In fact, if there's one reason why creationist critiques are so consistently misguided it's that adherents generally presuppose that special creation is true and then sift the evidence for clues to support that supposition-a recipe for self-deception that stands in stark contrast to the scientific method, which mandates that fresh hypotheses be derived from all available evidence. 
Were this fundamental misconception to be extinguished in a sudden wave of scientific literacy, the illusory evidence that thinking creationists use to anchor their beliefs would be swept away in an instant, leaving precious little demand for the writings of creation "scientists." As it is, however, an ungodly amount of literature is being published by the sectarian faithful in a spirited attempt to preserve mankind's privileged place in the grand scheme of things. Whether knowingly or not, creationists of every stripe have come to rely on an assortment of pseudoscientific arguments to legitimize their efforts to unravel the fabric of evolutionary theory, hoping against hope that the extensive tapestry woven by seven generations of scientists might somehow dissolve with the tug of a few loose threads. Unfortunately, as the weave of evolutionary theory has continued to tighten and expand, the number and variety of confused arguments in defense of creationism and intelligent design have only risen to keep pace. One popular approach enlisted by creation "scientists" is the classic all-or-nothing argument wherein proponents claim that nothing in science can be known with confidence until every last detail is described with absolute certainty. Appealing largely to those unschooled in the scientific method, critics point to such nonissues as gaps in the fossil record, poorly understood aspects of gene function, and the mystery of life's origins as reasons to view evolutionary theory as speculative or provisional. What they fail to appreciate is that scientific theories are built solely upon evidence that is actually available for study and so cannot be refuted by speculation regarding those clues that remain hidden. As long as a theory remains consistent with observed phenomena and yields valid predictions, it must be considered a viable explanation regardless of what remains to be discovered.
Thus, it is entirely irrelevant that gaps in the fossil record exist, but vitally important that those fossils that do exist make sense in the context of evolution. A single hominid fossil found among the trilobites of the Burgess Shale, for instance, would immediately throw Darwin's theory into doubt. Likewise, the fact that certain aspects of molecular genetics remain to be fully described in no way negates the fact that the substantial amount that is known about gene function is entirely consistent with evolution as we understand it today. Yet despite the proverbial admonition against doing so, many still view the absence of evidence as evidence of absence and remain all too eager to fill this fictional void with the narrative of their choosing. Indeed, this particular brand of argumentum ad ignorantiam has long been a mainstay for creationists looking to wedge their cosmology between the narrowing gaps of scientific knowledge (an increasingly difficult task). But issues of legitimacy aside, because this fallacy has sired so many specious claims over the years it seems only fitting that the mother of all such "arguments to ignorance" should stem from the granddaddy of all biological data gaps: the evolution of single-celled life forms. Because no physical body of evidence exists to document the beginning of life on Earth, this information gap has proven to be a wildly popular (albeit wholly inappropriate) foil for those seeking to discredit evolutionary theory. In truth, the origin of life is an issue entirely separate from the origin of species, rendering this otherwise important question utterly irrelevant as far as the veracity of natural selection is concerned. Whether the first primitive life form arose from known physical processes or was somehow willed into being through means beyond our understanding, evidence that all life on Earth descended from simple primordial beings remains just as compelling, and the myth of independent creation just as untenable. 
But even this slender refuge for creationist sentiment has now begun to evaporate under the light of modern scientific scrutiny, for although Earth's original life forms left no physical evidence for scientists to examine, credible hypotheses regarding the spontaneous formation and assembly of self-replicating molecules have been proposed and tested nonetheless. Laboratory experiments and astronomical observations suggest that key organic compounds were present in abundance shortly following Earth's formation and that natural chemical affinities and mineral scaffolds may have acted in concert to produce the simplest of biochemical copying machines. In 1953, Stanley Miller became the first to demonstrate that amino acids and other organic molecules could have formed through chemical means in prebiotic oceans capped with an atmosphere of ammonia, methane, and hydrogen gas. Although geochemists now question Miller's assumptions regarding the reducing power of the prebiotic atmosphere (Bada 2003), reducing environments may well have existed in isolated pockets on the embryonic Earth (near volcanic vents for instance). Moreover, many of these same organic compounds have been found to exist among interstellar dust clouds and meteorites, suggesting that life's building blocks may have been delivered to Earth on the backs of icy comets and carbonaceous asteroids. Based on these and other findings, biochemists have proposed several plausible mechanisms by which these compounds may have coalesced of their own accord into the precursors of life. Experiments confirm that layered mineral deposits can attract, concentrate, and link organic molecules and that certain clays may function as scaffolding for assembling the molecular components of RNA (Hazen 2001).
Crystalline templates have also been proposed as possible means of primitive protein assembly, their mirror-image surface structure accounting for the curious predominance of "left-handed" amino acids found in all creatures living today. These and other minerals have also been shown to facilitate the sequence of chemical transformations needed to spark life, acting as sheltered containers (feldspar), catalysts (magnetite), and iron sulfide reactants (pyrite). What's more, a complex mixture of organic compounds formed within simulated interstellar ices has recently been observed to spontaneously form cell-like vessels when immersed in water (Dworkin 2001), providing yet another viable mechanism by which particles awash in a dilute prebiotic soup might have assembled themselves into crude cells. Although the precise sequence of events will never be known with absolute certainty, these and similar experiments strongly suggest that the earliest terrestrial life forms arose spontaneously and in accordance with the known laws of nature. In short, everything we have come to understand about our world suggests that living creatures are a natural consequence of the laws that govern the physical universe-no more anomalous than the matter they comprise or the space they occupy. Yet despite all efforts to disseminate this hard-earned knowledge, a broad swath of creationist sentiment lingers on, fueled by well-worn arguments ranging from the philosophical and dogmatic to the confused and plain disingenuous. The great majority of these objections, however, quickly collapse under even the most cursory examination. Many of the "scientific" arguments for intelligent design, for instance, invoke common misconceptions about how the physical world really works, as in the classic "watchmaker" argument wherein nature is assumed to act randomly and possess no organizational tendencies. 
Given this false premise, it is a simple matter to show that complex molecular structures could never have formed by chance alone any more than a factory whirlwind could assemble a Mercedes Benz from its component parts. But anyone with a basic understanding of chemistry knows full well that such analogies do not apply to atoms and molecules. If the physical sciences have taught us nothing else, it's that the world of the very small is surprisingly counterintuitive. Processes in the realm of the microscopic simply do not behave as one might expect based on our experience living on the macroscopic plane. Electric charges, energy barriers, and nuclear forces all dominate the realm of the minuscule and compel individual atoms to form stable chemical bonds with neighboring elements, blindly building molecular structures of every possible type and complexity that the laws of physical chemistry will allow. Objects large enough to arouse our naked senses, on the other hand, behave quite differently. Because they exhibit no special affinity for one another, the scattered components of a disassembled watch will never coalesce of their own accord-the odds against such haphazard assemblies are simply too long. Nature, however, does not act without organizational tendencies nor are living organisms randomly assembled. There is now ample reason to believe that simple unicellular life forms arose through processes endemic to the life-friendly universe we occupy and that more sophisticated beings slowly emerged from these modest beginnings. Indeed, all complex organisms on Earth (including humans) begin life as single cells that multiply, differentiate, and ultimately mature to assume the form of their parents-all in strict accordance with the natural laws of biochemistry.
The contention that evolution somehow violates the second law of thermodynamics is another popular fiction that has endured through widespread confusion over a fundamental physical concept-in this case, thermodynamic entropy. Couched in the plainest possible terms, the second law simply states that energy tends to spread from areas where it is concentrated to areas where it is not. Although it is not widely recognized, this phenomenon is an integral part of our everyday experience and shapes our commonsense expectations. Because energy always flows from where it is concentrated to where it is more diffuse, we expect, say, a warm bottle of Gewürztraminer to chill when lowered into a bucket of ice water. In this instance, thermal energy will flow from the tepid wine to the surrounding fluid until both reach a common temperature and an energetic balance is achieved. Like the ice bucket and its contents, self-contained systems receiving no external energy will always experience a net increase in the diffusion of thermal energy, or a rise in thermodynamic entropy, resulting in lower energy gradients and less potential to do work. Regrettably, this same term has also come to be used in a statistical context involving the distribution of particles placed in random motion within a closed system-a situation that has bred a great deal of confusion. Unlike thermodynamic entropy, which defines energy distributions, "logical" entropy describes the probability that randomly distributed particles will assume a certain configuration or organized pattern. Ordered systems with low entropy values may appear to the casual observer to contain discernable patterns whereas high entropy systems seem more disorganized. Gas molecules distributed within an enclosure, for example, are said to exhibit greater entropy when they are scattered than when they are grouped together. Why?
Because although every possible pattern of molecules has an equal chance of occurring, there are a great many more ways to define a diffuse pattern than any given clumped arrangement and, as physicist Richard Feynman was keen to observe, logical entropy is simply "the logarithm of that number of ways." Despite the fact that thermodynamic and logical entropy are wholly independent concepts, many laymen-and a few scientists who really should know better-have nonetheless come to confuse and intermingle the two, transforming the second law of thermodynamics into a fictitious "law of disorder" that ostensibly explains why all material things decay and fall apart. In truth this has nothing to do with the second law of thermodynamics and even misuses the concept of logical entropy in that it attempts to explain large-scale phenomena. There is, in fact, no such universal mandate of decay that precludes the spontaneous formation of complex assemblages. Just because all complex systems will eventually break down as energy throughout the cosmos becomes evenly distributed doesn't mean that some interesting patterns can't take shape in the meantime. Those who argue this point from a purely energetic standpoint are somewhat less confused but just as easily refuted. The fact that the amount of energy available to do work must always decrease in a closed system would indeed be a serious impediment to the evolution of life if our planet were isolated from all external energy sources, but one need look no further than our companion star to see that such is not the case. Energy is constantly being delivered to the thin shell of our biosphere both from above, in the form of sunlight, and below, via heat generated by Earth's radioactive core, providing ample energy to fuel the assembly of structured molecules. 
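Feynman's phrase lends itself to a quick toy calculation. The sketch below (an illustration of the counting argument above, with N = 100 chosen arbitrarily) tallies the ways gas molecules can be split between the two halves of an enclosure: the even split can be realized in vastly more ways than the clumped one, which is why diffuse arrangements carry the higher logical entropy.

```python
# Count arrangements of N distinguishable gas molecules between the two
# halves of a box. "Logical" entropy here is the logarithm of the number
# of ways a given left/right split can occur; N = 100 is a toy value.
from math import comb, log

N = 100
ways_clumped = comb(N, 0)     # every molecule in the left half: 1 way
ways_even = comb(N, N // 2)   # 50/50 split: about 1e29 ways

print("entropy of clumped state:", log(ways_clumped))   # log(1) = 0.0
print("entropy of even split: %.1f" % log(ways_even))   # about 66.8
```

Every individual arrangement is equally likely, but the 50/50 macrostate wins simply because it is backed by astronomically more microstates.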
Moreover, while it is true that the overall entropy of an isolated system cannot decrease, the entropy of certain parts of a system can, and often does, spontaneously decrease at the expense of even greater increases in adjacent regions, as with the formation of crystalline salts and snowflakes. Besides that, millions of chemical compounds including water, cholesterol, and DNA actually carry less energy than the elements they contain (possessing "negative energies of formation" in scientific parlance). In these cases, the second law of thermodynamics actually favors the impromptu formation of complex structured molecules due to their tendency to disperse energy as they coalesce. Another threadbare canard spread by the creationist camp is that biological evolution is still not widely accepted within the scientific community-a ruse for which competing evolutionary hypotheses are offered up as evidence. The truth of the matter is quite the opposite. The fact that biologists support alternate hypotheses regarding specific evolutionary mechanisms no more challenges the reality of evolution than Einstein's relativistic views threatened the existence of gravity. Whether evolution proceeds in fits and starts as envisioned by the punctuated equilibrium model or progresses with more stately regularity, each competing hypothesis simply seeks to explain a certain aspect of evolution in a plausible way. The overarching framework of evolution itself, however, remains astonishingly consistent with the huge body of evidence accumulated to date. Far from being the object of scientific debate, the evolution of species is actually no more, and no less, than the collection of observed facts that these hypotheses are meant to explain. Gene flow, frequency dependence, and punctuated equilibrium are but three possible mechanisms put forward to explain the nature of this overarching phenomenon. 
Which, if any, of these hypotheses survive the test of time bears no influence on whether modern species are the product of biological evolution-the evidence in this regard, now comprising countless independent observations, is simply overwhelming. It is only the processes that drive the phenomenon of evolution that remain the object of scientific scrutiny. Unencumbered by the rules of scientific inquiry, others proclaim with total aplomb that evolution can never be truly validated until major speciation events (the transformation of land mammals into whales for instance) are observed directly. In this case, what is ignored is the important fact that reliable scientific evidence is not limited to firsthand experience of real-time events but includes all forms of physical clues. The folly of this argument becomes evident when one considers that knowledge of galaxy formation, stellar composition, and subatomic particles would be impossible if researchers were to adopt similar rules of evidence across the whole of science. But why stop at the boundaries of academia? Imagine for a moment the chaos that would ensue within the criminal justice system if such an unreasonable burden of proof were placed on prosecutors! Indeed, as many jurors would no doubt attest, it is often the physical evidence that proves most compelling in a court of law, eclipsing even eyewitness accounts that can be tainted by errors of interpretation or outright deceit. Beliefs maintained through the narrow interpretation of isolated facts or held in default against evidence not readily understood can be called any number of things, but "scientific" is certainly not one of them. As these few examples illustrate, the myriad approaches adopted by creation "scientists" in their attempts to undermine evolutionary theory are indeed quite creative but hardly scientific. 
As has been demonstrated time and again, evidence carefully sifted can be enlisted to endorse practically any supposition so long as the preponderance of contrary clues is ignored and the rules of sound scientific practice are suspended. It is precisely this brand of exclusionary thinking that enables young-Earth devotees to dismiss mountains of physical evidence while defending their assertions with such flawed assumptions as constant population growth and the linear decay of Earth's magnetic field (both demonstrably false). Likewise, partisans who claim that evolutionary processes have never actually been observed inexplicably dismiss the scientific literature where such observations have been reported in abundance. In truth, physical adaptations to environmental pressures have been documented in hundreds of modern species from bacteria and fruit flies to birds, squirrels, and stickleback fish (Pennisi 2000). Even Darwin's own finches have been caught in the act of adaptation thanks to decades of meticulous study spearheaded by Princeton biologists Peter and Rosemary Grant. A full accounting of the ways in which the scientific method has been manipulated to promote creationist sentiment would doubtless occupy many volumes, but in no instance has a legitimate scientific case ever been made to countermand the notion that, as Darwin phrased it: "from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." References: 1. Bada, Jeffrey L., and Antonio Lazcano. 2003. Prebiotic soup-revisiting the Miller experiment. Science 300:745-746. 2. Dworkin, Jason P., David W. Deamer, Scott A. Sandford, and Louis J. Allamandola. 2001. Self-assembling amphiphilic molecules: Synthesis in simulated interstellar/precometary ices. Proceedings of the National Academy of Sciences 98(3): 815-819. 3. Hazen, Robert M. 2001. Life's rocky start. Scientific American April: 77-85. 4. Pennisi, Elizabeth. 2000. Nature steers a predictable course. 
Science 278: 207-208. About the Author Dennis R. Trumble is Senior Bioengineer in the Department of Cardiothoracic Surgery at Allegheny General Hospital (Pittsburgh, Pennsylvania) and Research Instructor of Surgery at Drexel University College of Medicine. Correspondence may be addressed to D.R. Trumble, Cardiac Surgery Research, Allegheny General Hospital, 320 East North Avenue, Pittsburgh, PA, 15212. From checker at panix.com Fri Jul 22 19:07:35 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:07:35 -0400 (EDT) Subject: [Paleopsych] CHE: Filmmaker Starts Foundation to Help Students Chill Out -- With Transcendental Meditation Message-ID: Filmmaker Starts Foundation to Help Students Chill Out -- With Transcendental Meditation News bulletin from the Chronicle of Higher Education, 5.7.21 http://chronicle.com/prm/daily/2005/07/2005072104n.htm By ERIC HOOVER In the late 1960s, college students closed their eyes, expanded their minds, and made meditation popular on campuses. Now David Lynch wants them to do it again. Mr. Lynch, director of the films Blue Velvet, Eraserhead, and Mulholland Drive, is scheduled to announce today the establishment of the David Lynch Foundation for Consciousness-Based Education and World Peace, an organization that will promote the mental and physical benefits of Transcendental Meditation. Mr. 
Lynch, who says he has shut his eyes and entered the "field of oneness" twice each day for 32 years, plans to begin a speaking tour of 50 colleges and universities this fall to tout meditation as a tool for overcoming anxiety and stress. "It's an ingredient that's missing from education," says Mr. Lynch. The Indian guru Maharishi Mahesh Yogi introduced Transcendental Meditation, known as "diving within," a half-century ago. The mental technique is practiced silently for 15 to 20 minutes twice a day. Researchers have found that it can reduce high blood pressure and improve brain function, among other health benefits. Proponents of the practice say that meditating also can improve students' academic performances and foster their creativity. Students who meditate achieve a state of "relaxed alertness" that helps them complete assignments more easily, according to William R. Stixrud, a clinical neuropsychologist based in Silver Spring, Md., and a member of the foundation's Board of Advisers. Mr. Lynch's own descriptions of Transcendental Meditation are no less imaginative than his films, in which time does strange things and symbols leap out of the unconscious. Meditating, the director says, is a "dive into pure creativity" that has helped him realize his artistic potential, a portal to the source of "love, consciousness, creativity, and power" that, in a mere two weeks, transformed him from an angry man to a happy fellow. "It is this light you turn on," he says, "that gets rid of negativity." Mr. Lynch believes that frazzled high-school and college students are in need of such a light because of academic pressures, fatigue, and stress. He imagines a world in which each student has a class period a day to experience silence and bliss. His foundation, which he is establishing with his own money, intends to finance meditation classes for students, as well as institutional research on the physiological effects of the technique. 
Bob Roth, the foundation's program director and a meditation instructor, says the group will seek to raise additional funds from the entertainment industry and philanthropic groups to help fulfill its mission: to ensure that every child and young adult who wants to learn Transcendental Meditation can do so. The foundation will provide funds for some students to learn the technique and receive follow-up training, Mr. Roth says. Even more ambitious is the foundation's plan to raise $7-billion to help establish seven affiliated Universities of World Peace that would train students to become "professional peacemakers." But that is a long-term goal, Mr. Lynch concedes. Closer to the present, he predicts that college campuses are primed for a Transcendental Meditation revival. "Some students will say, That's baloney, but others will say, I've got to have that," Mr. Lynch says. "But first they have to hear about it." College students, who already possess an array of pharmacological treatments for depression and anxiety disorders, may need convincing that taking a timeout twice a day could help them overcome such problems -- or that sitting quietly might do more to soothe them than their favorite alcoholic beverage could. Then again, a generation that grew up to believe in "the force" from Star Wars may just warm to the message that they can influence their own destiny by looking inward. Background articles from The Chronicle: The Spokesman Who Kept Calling (4/22/2005), http://chronicle.com/weekly/v51/i33/33a05601.htm; Opinion: A Neo-Noir Filmmaker Echoes a Philosopher's Quest for Truth (11/14/2003), http://chronicle.com/weekly/v50/i12/12b01401.htm. E-mail me if you have problems getting the referenced articles. 
From checker at panix.com Fri Jul 22 19:08:01 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:08:01 -0400 (EDT) Subject: [Paleopsych] Science: Mapping the large-scale structure of the universe Message-ID: Mapping the large-scale structure of the universe http://www.sciencemag.org/cgi/content/full/309/5734/564 Science, Vol 309, Issue 5734, 564-565 , 22 July 2005 [DOI: 10.1126/science.1115128] [Thanks to Eugen for this.] Mapping the Large-Scale Structure of the Universe David H. Weinberg* In a large-scale view of the universe, galaxies are the basic unit of structure. A typical bright galaxy may contain 100 billion stars and span tens of thousands of light-years, but the empty expanses between the galaxies are much larger still. Galaxies are not randomly distributed in space, but instead reside in groups and clusters, which are themselves arranged in an intricate lattice of filaments and walls, threaded by tunnels and pocked with bubbles. Two ambitious new surveys, the Two-Degree Field Galaxy Redshift Survey (2dFGRS) and the Sloan Digital Sky Survey (SDSS), have mapped the three-dimensional distribution of galaxies over an unprecedented range of scales (1, 2). Astronomers are using these maps to learn about conditions in the early universe, the matter and energy contents of the cosmos, and the physics of galaxy formation. Galaxies and large-scale structure form as a result of the gravitational amplification of tiny primordial fluctuations in the density of matter. The inflation hypothesis ascribes the origin of these fluctuations to quantum processes during a period of exponential expansion that occupied the first millionth-of-a-billionth-of-a-trillionth of a second of cosmic history. 
Experiments over the last decade have revealed the imprint of these fluctuations as part-in-100,000 intensity modulations of the cosmic microwave background (CMB), which records the small inhomogeneities present in the universe half a million years after the big bang. Although the visible components of galaxies are made of "normal" baryonic matter (mostly hydrogen and helium), the gravitational forces that drive the growth of structure come mainly from dark matter, which is immune to electromagnetic interactions. By combining precise, quantitative measurements of present-day galaxy clustering with CMB data and other cosmological observations, astronomers hope to test the inflation hypothesis, to pin down the physical mechanisms of inflation, to measure the amounts of baryonic and dark matter in the cosmos, and to probe the nature of the mysterious "dark energy" that has caused the expansion of the universe to accelerate over the last 5 billion years. The 2dFGRS, completed in 2003, measured distances to 220,000 galaxies, and the SDSS is now 80% of the way to its goal of 800,000 galaxies (see the figure). The key challenge in interpreting the observed clustering is the uncertain relation between the distribution of galaxies and the underlying distribution of dark matter. If the galaxy maps are smoothed over tens of millions of lightyears, this relation is expected to be fairly simple: Variations in galaxy density are constant multiples of the variations in dark matter density. Quantitative analysis in this regime has focused on the spatial power spectrum, which characterizes the strength of clustering on different size scales (3, 4). The power spectrum describes the way that large, intermediate, and small structures--like the mountain ranges, isolated peaks, and rolling hills of a landscape--combine to produce the observed galaxy distribution. 
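The binning behind a power-spectrum estimate can be sketched in a few lines. The toy below (a minimal illustration, not the 2dFGRS or SDSS pipeline) Fourier-transforms a density-fluctuation field and averages the squared mode amplitudes in shells of wavenumber |k|; the uncorrelated noise field stands in for a real galaxy-overdensity map, so the result comes out flat:

```python
# Minimal power-spectrum estimate P(k): squared Fourier amplitudes of a
# density field, averaged in bins of wavenumber magnitude (grid units).
# The Gaussian noise field is a stand-in for real survey data.
import numpy as np

n = 64
rng = np.random.default_rng(0)
delta = rng.normal(size=(n, n, n))                 # toy overdensity field
power = np.abs(np.fft.rfftn(delta)) ** 2 / delta.size

# Wavenumber magnitude of every stored Fourier mode.
kx = np.fft.fftfreq(n)
kz = np.fft.rfftfreq(n)
kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)

bins = np.linspace(0, 0.5, 11)
idx = np.digitize(kmag.ravel(), bins)
pk = np.array([power.ravel()[idx == i].mean() for i in range(1, len(bins))])
print(np.round(pk, 2))   # roughly 1.0 in every bin: white noise is scale-free
```

A real galaxy map would instead show more power at some scales than others; the shape of that curve is exactly the diagnostic the surveys exploit.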
The shape of the dark matter power spectrum is a diagnostic of the inflation model, which predicts the input spectrum from the early universe, and of the average dark matter density, which controls the subsequent gravitational growth. Recent analyses have also detected subtle modulations of the power spectrum caused by baryonic matter, which undergoes acoustic oscillations in the early universe because of its interaction with photons (4, 5). To go further, one would like to know the precise amplitude of dark matter clustering, not just the variation of clustering with scale. Unfortunately, the factor relating galaxy and dark matter densities depends on aspects of galaxy formation that are difficult to model theoretically. One observational approach to isolating dark matter clustering uses weak gravitational lensing, in which the dark matter surrounding nearby galaxies subtly distorts the apparent shapes of the distant galaxies behind them. Another approach uses the relative probabilities of different triangular configurations of galaxy triples, picking out the characteristic filamentary geometry produced by anisotropic gravitational collapse. Applications of these two methods to the SDSS and the 2dFGRS, respectively, imply that the clustering strength of bright, Milky Way-type galaxies is similar to that of the underlying dark matter (6, 7). Figure 1 The big picture. Large-scale structure in the SDSS. The SDSS uses a mosaic charge-coupled device camera to image regions of the sky and a fiber-fed spectrograph to measure distances of galaxies selected from these images. The main panel shows the SDSS map of 67,000 galaxies that lie within 5° of the equatorial plane; the region of sky obscured by the Milky Way is not mapped. Each wedge is 2 billion light-years in extent. Galaxies are color-coded by luminosity, and more luminous galaxies can be seen to greater distances. 
(Inset) SDSS image of a cluster of galaxies, showing a region roughly 1 million light-years on a side. CREDIT: IMAGES COURTESY OF THE SLOAN DIGITAL SKY SURVEY COLLABORATION On smaller scales, the relation between galaxies and dark matter becomes more complex, and it is different for different types of galaxies. Redder galaxies composed of older stars reside primarily in clusters, the dense urban cores of the galaxy distribution. Younger, bluer galaxies populate the sprawling, filamentary suburbs. Current efforts to model galaxy clustering in this regime focus on the "halo occupation distribution," a statistical description of the galaxy populations of gravitationally bound "halos" of dark matter. Depending on its mass, an individual dark halo may play host to a single bright galaxy, a small group of galaxies, or a rich cluster. By combining theoretical predictions for the masses and clustering of halos with precise measurements of the clustering of galaxies, one can infer the halo occupation distribution for different classes of galaxies empirically. Theoretical models of galaxy formation predict a strong dependence of halo occupation on galaxy luminosity and color, and the initial results from the 2dFGRS and the SDSS show good qualitative agreement with these predictions (8, 9). Increased precision and measurements for more galaxy classes will test the predictions in much greater detail, and they will sharpen our understanding of the physical processes that produce visible galaxies in the first place and determine their observable properties. By deriving the relation between galaxies and dark matter from the clustering data themselves, halo occupation methods also allow new cosmological model tests that take advantage of precise measurements on small and intermediate scales. 
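The halo-occupation idea described above can be made concrete with a toy calculation. Everything in the sketch below -- the central-galaxy-plus-Poisson-satellites form and every parameter value -- is a generic illustrative assumption, not the parameterization used by the 2dFGRS or SDSS teams:

```python
# Toy "halo occupation distribution": given dark-halo masses, draw how
# many galaxies each halo hosts. Halos below a mass threshold stay empty;
# above it, one central galaxy plus Poisson-distributed satellites with a
# power-law mean occupation. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def occupy(mass, m_min=1e12, m_1=2e13, alpha=1.0):
    """Number of galaxies hosted by a halo of the given mass (solar masses)."""
    if mass < m_min:
        return 0                                   # too small to host a bright galaxy
    satellites = rng.poisson((mass / m_1) ** alpha)
    return 1 + satellites                          # central galaxy + satellites

halo_masses = 10 ** rng.uniform(11, 15, size=1000)   # toy halo catalog
counts = np.array([occupy(m) for m in halo_masses])
print("fraction of empty halos:", (counts == 0).mean())
print("mean galaxies in >1e14 halos:", counts[halo_masses > 1e14].mean())
```

Low-mass halos host a single bright galaxy at most, while cluster-scale halos host dozens, reproducing the single-galaxy / group / rich-cluster hierarchy the text describes; fitting the parameters of such a recipe to measured clustering is what "inferring the halo occupation distribution empirically" amounts to.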
The large-scale clustering results from the 2dFGRS and the SDSS, in combination with CMB measurements and other cosmological data, support the predictions of a simple inflation model in a universe that contains 5% normal matter, 25% dark matter, and 70% dark energy (10). However, several analyses that incorporate smaller scale clustering suggest that either the matter density or the matter clustering amplitude is lower than this "concordance" model predicts, by 30 to 50% (11-13). This tension could reflect systematic errors in the measurements or the modeling, but it could also signal some departure from the simplest models of primordial fluctuations or dark energy. For example, if inflation produces gravity waves that contribute to observed CMB fluctuations, then naïve extrapolation of these fluctuations would overpredict the level of matter clustering today. Alternatively, evolution of the dark energy component can affect the amount of growth since the CMB epoch. As the SDSS moves toward completion, improved clustering measurements and analyses may restore the consensus on a "vanilla" cosmological model, or they may provide sharper evidence that our theoretical recipe for the universe is still missing a key ingredient. References and Notes 1. M. Colless et al., Mon. Not. R. Astron. Soc. 328, 1039 (2001). [ADS] 2. D. G. York et al., Astron. J. 120, 1579 (2000). [ADS] 3. M. Tegmark et al., Astrophys. J. 606, 702 (2004). [ADS] 4. S. Cole et al., http://arxiv.org/abs/astro-ph/0501174. 5. D. J. Eisenstein et al., http://arxiv.org/abs/astro-ph/0501171. 6. E. Sheldon et al., Astron. J. 127, 2544 (2004). [ADS] 7. L. Verde et al., Mon. Not. R. Astron. Soc. 335, 432 (2002). [ADS] 8. F. C. van den Bosch, X. Yang, H. J. Mo, Mon. Not. R. Astron. Soc. 340, 771 (2003). [ADS] 9. I. Zehavi et al., http://arxiv.org/abs/astro-ph/0408569. 10. Convergence on this model from several independent lines of evidence was the 2003 Science "Breakthrough of the Year," C. 
Seife, Science 302, 2038 (2003). 11. F. C. van den Bosch, H. J. Mo, X. Yang, Mon. Not. R. Astron. Soc. 345, 923 (2003). [ADS] 12. N. A. Bahcall et al., Astrophys. J. 585, 182 (2003). [ADS] 13. J. L. Tinker, D. H. Weinberg, Z. Zheng, I. Zehavi, Astrophys. J., in press; preprint available at http://arxiv.org/abs/astro-ph/0411777. 14. I thank the National Science Foundation for support. The author is in the Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210, USA E-mail: dhw at astronomy.ohio-state.edu From checker at panix.com Fri Jul 22 19:08:23 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:08:23 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: Why Do They Hate Us? (with a meme from me) Message-ID: Why Do They Hate Us? Not Because of Iraq New York Times Op-Ed, 5.7.22 http://www.nytimes.com/2005/07/22/opinion/22roy.html [I append the meme I sent little more than a month after the 9/11 attacks on Scruppies (scripture-pounding yuppies). I see little reason to revise what I wrote then. Envy remains a powerful force in the world.] By OLIVIER ROY Paris WHILE yesterday's explosions on London's subway and bus lines were thankfully far less serious than those of two weeks ago, they will lead many to raise a troubling question: has Britain (and Spain as well) been "punished" by Al Qaeda for participating in the American-led military interventions in Iraq and Afghanistan? While this is a reasonable line of thinking, it presupposes the answer to a broader and more pertinent question: Are the roots of Islamic terrorism in the Middle Eastern conflicts? If the answer is yes, the solution is simple to formulate, although not to achieve: leave Afghanistan and Iraq, solve the Israel-Palestine conflict. But if the answer is no, as I suspect it is, we should look deeper into the radicalization of young, Westernized Muslims. Conflicts in the Middle East have a tremendous impact on Muslim public opinion worldwide. 
In justifying its terrorist attacks by referring to Iraq, Al Qaeda is looking for popularity or at least legitimacy among Muslims. But many of the terrorist group's statements, actions and non-actions indicate that this is largely propaganda, and that Iraq, Afghanistan and Palestine are hardly the motivating factors behind its global jihad. First, let's consider the chronology. The Americans went to Iraq and Afghanistan after 9/11, not before. Mohamed Atta and the other pilots were not driven by Iraq or Afghanistan. Were they then driven by the plight of the Palestinians? It seems unlikely. After all, the attack was plotted well before the second intifada began in September 2000, at a time of relative optimism in Israeli-Palestinian negotiations. Another motivating factor, we are told, was the presence of "infidel" troops in Islam's holy lands. Yes, Osama Bin Laden was reported to be upset when the Saudi royal family allowed Western troops into the kingdom before the Persian Gulf war. But Mr. bin Laden was by that time a veteran fighter committed to global jihad. He and the other members of the first generation of Al Qaeda left the Middle East to fight the Soviet Union in Afghanistan in the 1980's. Except for the smallish Egyptian faction led by Ayman al-Zawahiri, now Mr. bin Laden's chief deputy, these militants were not involved in Middle Eastern politics. Abdullah Azzam, Mr. bin Laden's mentor, gave up supporting the Palestinian Liberation Organization long before his death in 1989 because he felt that to fight for a localized political cause was to forsake the real jihad, which he felt should be international and religious in character. From the beginning, Al Qaeda's fighters were global jihadists, and their favored battlegrounds have been outside the Middle East: Afghanistan, Bosnia, Chechnya and Kashmir. For them, every conflict is simply a part of the Western encroachment on the Muslim ummah, the worldwide community of believers. 
Second, if the conflicts in Afghanistan, Iraq and Palestine are at the core of the radicalization, why are there virtually no Afghans, Iraqis or Palestinians among the terrorists? Rather, the bombers are mostly from the Arabian Peninsula, North Africa, Egypt and Pakistan - or they are Western-born converts to Islam. Why would a Pakistani or a Spaniard be more angry than an Afghan about American troops in Afghanistan? It is precisely because they do not care about Afghanistan as such, but see the United States involvement there as part of a global phenomenon of cultural domination. What was true for the first generation of Al Qaeda is also relevant for the present generation: even if these young men are from Middle Eastern or South Asian families, they are for the most part Westernized Muslims living or even born in Europe who turn to radical Islam. Moreover, converts are to be found in almost every Qaeda cell: they did not turn fundamentalist because of Iraq, but because they felt excluded from Western society (this is especially true of the many converts from the Caribbean islands, both in Britain and France). "Born again" or converts, they are rebels looking for a cause. They find it in the dream of a virtual, universal ummah, the same way the ultraleftists of the 1970's (the Baader-Meinhof Gang, the Italian Red Brigades) cast their terrorist actions in the name of the "world proletariat" and "Revolution" without really caring about what would happen after. It is also interesting to note that none of the Islamic terrorists captured so far had been active in any legitimate antiwar movements or even in organized political support for the people they claim to be fighting for. They don't distribute leaflets or collect money for hospitals and schools. They do not have a rational strategy to push for the interests of the Iraqi or Palestinian people. Even their calls for the withdrawal of the European troops from Iraq ring false. 
After all, the Spanish police have foiled terrorist attempts in Madrid even after the government withdrew its forces. Western-based radicals strike where they are living, not where they are instructed to or where it will have the greatest political effect on behalf of their nominal causes. The Western-based Islamic terrorists are not the militant vanguard of the Muslim community; they are a lost generation, unmoored from traditional societies and cultures, frustrated by a Western society that does not meet their expectations. And their vision of a global ummah is both a mirror of and a form of revenge against the globalization that has made them what they are. Olivier Roy, a professor at the School for Advanced Studies in the Social Sciences, is the author of "Globalized Islam." --------------------- Meme 019: SCRUPPIES: Scripture-Pounding Yuppies 1.10.24 Fundamentalism is characteristic, not of aging conservatives so much as of young, urbanizing populations undergoing great change. Fundamentalism eases their road to modernization. What they do is find a high-price faith that demands strict adherence and commitment and then go through their scripture and carefully select passages that emphasize clean living, strict obedience, and *making money*. They then insist on taking these passages infallibly and literally (and ignoring the rest). They are scripture-pounding Yuppies, and I call them Scruppies. This is described, in the case of the Moslems, in Samuel Huntington's _Clash of Civilizations_, where he notes that fundamentalist beliefs are highest in medical and professional schools. Scruppies are also characteristic of Louis Farrakhan's Nation of Islam (http://www.finalcall.com), and it was true of the first Protestant Ethic described by Max Weber in 1904/5. The terrorists are *failed* Scruppies. Get too many failed males of the high testosterone years, fill *them* up with fundamentalism and they emphasize, not the money-making elements but the strict adherence. 
Huntington shows a fine correlation between the peak of the 15-24-year-old youth bulge in Moslem countries and fundamentalist take-overs. [I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.] From checker at panix.com Fri Jul 22 19:12:14 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:12:14 -0400 (EDT) Subject: [Paleopsych] NYT: Among Dissident Union Leaders, the Backgrounds May Vary but the Vision Is the Same Message-ID: Among Dissident Union Leaders, the Backgrounds May Vary but the Vision Is the Same http://www.nytimes.com/2005/07/22/national/22labor.html?pagewanted=print [This is an important article. It's about the further decline of Big Labor, really. The percent of the U.S. labor force that is unionized is the same now as it was a century ago. But today, the majority of union workers are in government service at one level or another, the most influential being the teachers' unions. [When John Kerry was campaigning for the presidency, he said he'd never cross a picket line, an attempt to win back the Reagan Democrats. Today the Democratic Party consists largely of the New Class and minorities. [It's also instructive to realize that the same percentage of workers, 20% iirc, is in manufacturing today as it was in 1820, also iirc. Manufacturing workers exceeded agricultural workers soon after the War Between the States, and service workers exceeded manufacturing workers in 1957.] By STEVEN GREENHOUSE The band of union presidents who are threatening to create the biggest schism in organized labor in 70 years are a varied lot: a former meat cutter, a former social worker, the son of the last century's most controversial labor leader and the organizer who led the heralded unionization campaign at J. P. Stevens. Whatever their differences, they agree on a fundamental point: that the A.F.L.-C.I.O. 
has utterly failed to reverse labor's slide even as workers struggle to cope with stagnant wages and shrinking benefits. They argue that the federation needs to embrace far-reaching changes to save organized labor from oblivion. Failing that, the leaders of the dissident unions - which represent more than one-third of the federation's 13 million members - are warning they will secede from the federation. These leaders - from the Teamsters, service employees, food and commercial workers and Unite Here - just might carry out their threat next week at the A.F.L.-C.I.O.'s convention in Chicago. "The way things are going, there's a real concern about whether there's even going to be a labor movement in a decade or two," said Joe Hansen, who became president of the United Food and Commercial Workers Union 16 months ago. "We need to create a structure and vehicle that's really going to advance the cause of not just our members, but all of the nation's workers." He has threatened to leave the federation along with James P. Hoffa, the Teamsters' president whose famous - some say notorious - father headed that union four decades ago; Bruce Raynor, president of Unite Here, whose role as lead organizer in the campaign against J. P. Stevens in the 1970's became a central part of the movie "Norma Rae"; and Andrew L. Stern, a onetime social worker who heads the service employees. Their departure would be a major blow to organized labor. These dissidents represent three of the federation's four largest unions. The service employees have 1.8 million members, making it the biggest in the federation, while the Teamsters have 1.4 million members and the food and commercial workers, 1.3 million. "You either seize the opportunity to advance what you really believe in, or you let it go by and you become part of the status quo," Mr. Hansen said. "We can't come out of Chicago with the status quo." Mr.
Hansen, 61, a large man whose powerful hands bespeak his days as a meat cutter in Milwaukee, headed his union's packinghouse division before he inherited a union in disarray. It had been trounced in a four-and-a-half-month strike and lockout involving 58,000 supermarket workers in Southern California. Genial and pragmatic, Mr. Hansen has occasionally agreed to concessions to help save union jobs. After that trouncing, he helped stabilize the union's situation by successfully resisting demands for large-scale health care concessions in Seattle, Colorado and San Francisco. He is eager to turn around his union and sees the A.F.L.-C.I.O.'s current structure as inadequate to revive labor. John J. Sweeney, the A.F.L.-C.I.O.'s president, accuses the dissidents of betraying a core labor concept, solidarity, and he warns that their exodus would sabotage chances of a comeback for organized labor. A walkout would be a public relations disaster, many union leaders say, and hurt labor's clout in politics. "Breaking away isn't going to build a strong union movement," said Richard Trumka, the group's secretary-treasurer. "It will weaken the movement." Mr. Raynor disagreed. "We don't take this lightly," he said. "We care about labor unity, but we care much more about having a labor movement that is effective for American workers and their families." Mr. Raynor, 55, a Cornell graduate, dresses in expensive suits - finely crafted in union hands, keeping with a tradition for leaders in the apparel union. He made his name, at J. P. Stevens and elsewhere, as one of the shrewdest, toughest organizers in the anti-union South. Before union audiences, Mr. Raynor can be a brilliant speaker, but in confrontations with anti-union managers, he can be a bare-knuckled street fighter. During an organizing drive at Kmart, he spearheaded sit-ins at several stores. "We're in a crisis, and we need to change the way we do business," Mr. Raynor said. 
"The federation can't simply be an organization that is the lowest common denominator, that if any union objects to anything, then it doesn't happen. That's a voluntary trade organization. That's not what American workers need. They need a focused, strong labor federation." To that end, the dissidents have created a rival group, the Change to Win Coalition. They say the coalition will set requirements for each union on organizing and politics and will get its member unions to cooperate closely on giant unionization drives. "We have to stress organizing," Mr. Hoffa said. "The policies of the past 10 years haven't worked." Mr. Hoffa, 64, a graduate of the University of Michigan Law School, is not as charismatic, pugnacious or driven as was his father, James R. Hoffa, who disappeared in 1975. After taking the Teamsters' helm in 1998, he did little to increase organizing until the past two years. Mr. Stern, 54, the most charismatic of the dissidents, has grabbed most of the headlines. He was the first to threaten to quit the A.F.L.-C.I.O. and has helped persuade others to consider it. Known for his vision and impatience, he was the protégé of Mr. Sweeney when Mr. Sweeney headed the service employees' union. Mr. Stern has indicated that his union will probably quit, but whether the other unions follow depends on negotiations with Mr. Sweeney. They are demanding that he give the largest unions more power and agree to rebate half the federation's budget to individual unions to encourage organizing. They want the federation to push for mergers, to create larger, stronger unions, and they want strict standards requiring each union to do a specific amount of organizing. Terence O'Sullivan, the president of the laborers' union, shares the dissidents' harsh critique of the A.F.L.-C.I.O. but rules out quitting. "We're celebrating the 50th anniversary of the merger of the A.F.L. with the C.I.O.," Mr. O'Sullivan said, "but the A.F.L.-C.I.O.
actually has less members today than 50 years ago, even though the nation's work force has more than doubled since then." Mr. O'Sullivan has won praise for getting his union, once one of the most corrupt, to focus far more on recruiting. He also developed a reputation for business acumen after he was named chairman of Ullico, a union-owned insurance company. It was losing millions of dollars a year, and he nursed it back to health. Mr. O'Sullivan is seen as a possible successor to Mr. Sweeney, and has taken pains not to alienate Mr. Sweeney's backers. He sees himself as a bridge-builder who might prevent a schism, or, if there is one, might seek to limit it to months, not years. Several dissident leaders have indicated that they may not quit the federation if Mr. Sweeney retires. Mr. Sweeney, 71, has headed the federation for a decade and seems poised to win a new four-year term at next week's convention. One dissident, John W. Wilhelm, 60, the president of Unite Here's hotel and restaurant division, had discussed running against Mr. Sweeney, but dropped the idea when he was unable to muster enough support. Mr. Sweeney insists that the differences between the two sides are too small to warrant a walkout. "If ever there was a time for the union movement to be united, this is it - with working people under the biggest assault in 80 years," he said. "Disunity just plays into the hands of the worst enemies of workers." Insisting that he has long focused on organizing efforts, Mr. Sweeney says he will spend more on it and will encourage union mergers. But he has balked at the dissidents' demand to rebate half the federation's budget, saying that it would cripple the A.F.L.-C.I.O. He said the lion's share of organizing money should come from individual unions. Mr. Sweeney plans to raise political spending to $30 million a year, from $7.5 million. That move has angered Mr. Hoffa.
"There really is a debate about whether we're going to focus on growth or throw more money at politicians," he said. From checker at panix.com Fri Jul 22 19:12:45 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:12:45 -0400 (EDT) Subject: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races Message-ID: Kenan Malik: Is this the future we really want? Different drugs for different races The Times Online guest contributors Opinion http://www.timesonline.co.uk/article/0,,1072-1658766,00.html 5.6.18 A US GOVERNMENT advisory panel recommended this week that a drug which helps to treat congestive heart failure should be granted a licence. Its decision is controversial because BiDiL will be the first racially-targeted drug. When tested on the general population it proved ineffective, but when given to African-Americans, to whom it will be marketed, it appeared to cut death rates from heart failure by 43 per cent. The BiDiL debate gets to the heart of one of the most explosive issues in medicine. Does race matter in medicine? Or should it be colour-blind? The New England Journal of Medicine has argued that "race is biologically meaningless" and that doctors should be taught about "the dangers inherent in practising race-based medicine." Others disagree. The psychiatrist Sally Satel believes that in medicine "stereotyping often works". In her Washington drug clinic, Satel prescribes different amounts of Prozac to black and white patients because, she says, the two groups seem to metabolise antidepressants at different rates. So who is right? As with much else in debates about race, the answer is both sides and neither. Different populations do show different patterns of disease and disorder. Northern Europeans, for instance, are more likely to suffer from cystic fibrosis than other groups. Tay-Sachs, a fatal disease of the central nervous system, particularly affects Ashkenazi Jews.
Beta-blockers appear to work less effectively for African-Americans than for those of European descent. Yet race is not necessarily a good guide to disease. We all think we know that sickle-cell anaemia is a black disease. Except that it is not. Sickle cell is a disease of populations originating from areas with a high incidence of malaria. Some of these populations are black, some are not. The sickle-cell gene is found in equatorial Africa, parts of southern Europe, southern Turkey, parts of the Middle East and much of central India. Most people, however, only know that African-Americans suffer disproportionately from the trait. And, given popular ideas about race, they automatically assume that what applies to black Americans also applies to all blacks and only to blacks. It is the social imagination, not the biological reality, of race that turns sickle cell into a black disease. Genetic studies show that human beings comprise a relatively homogeneous species and that most of our genetic variation is at individual, not group, level. Imagine that a nuclear explosion wiped out the human race apart from one small population -- say, the Masai tribe in East Africa. Virtually all the genetic variation that exists in the world today would still be present in that one small group. About 85 per cent of human variation occurs between individuals within local populations. A further 10 per cent or so differentiates populations within a race. Only about 5 per cent of total variation distinguishes the major races. This is why many scientists reject the idea of race. Since most variation exists at the individual level, doctors ideally would like to map every individual's genome to be able to predict better his potential medical problems and responses to different drugs. Such individual genotyping is currently both impracticable and too costly, so doctors often resort to using surrogate indicators of an individual's risk profile -- such as race.
Until recently people were more likely to marry a neighbour than someone who hailed from distant lands. As a result, the farther apart two populations are geographically, the more distinct they are likely to be genetically. Icelanders are genetically different from Greeks, but they are genetically closer to Greeks than they are to Nigerians. The difference is tiny, but it can have a medical impact. Knowing the population from which your ancestors came can provide hints as to what genes you may be carrying. Hence race is, as Satel suggests, a "poor man's clue" in medicine. But a poor man's clue may be about as reliable as an intelligence dossier. First, there are no hard and fast divisions between populations. Every population runs into another and no gene is unique to one. Cystic fibrosis may be more common among northern Europeans but is not confined to them. One of the dangers of marketing BiDiL as a black drug is that it may be given to African-Americans who don't respond to it, but denied to non-blacks who could. Secondly, different genes are distributed differently among populations. The pattern of distribution of genes for cystic fibrosis is not the same as that of sickle-cell genes. Which population differences are important varies from one disease to another. Finally, many medical differences associated with race are likely to be the result of environmental rather than genetic differences, or a combination of the two. In the case of response to BiDiL, no one knows which is more important. All this suggests that the question of whether medicine should be colourblind depends on the particular problem we want to address. It is a pragmatic issue, not one rooted in scientific or political principle. Race, however, is such a contentious issue that pragmatism rarely enters the debate. On one side, so-called race realists think that population differences are so important that all medicine should be colour-coded.
On the other, many antiracists want to ban race-based research entirely for fear of its social consequences. Both are wrong. It is time everyone calmed down and took a grown-up view of the issue. Kenan Malik is author of Man, Beast and Zombie: What Science Can and Cannot Tell Us about Human Nature From checker at panix.com Fri Jul 22 19:12:56 2005 From: checker at panix.com (Premise Checker) Date: Fri, 22 Jul 2005 15:12:56 -0400 (EDT) Subject: [Paleopsych] NS: Details of US microwave-weapon tests revealed Message-ID: Details of US microwave-weapon tests revealed http://www.newscientist.com/article.ns?id=mg18725095.600&print=true [Thanks to Laird for this.] * 22 July 2005 * David Hambling VOLUNTEERS taking part in tests of the Pentagon's "less-lethal" microwave weapon were banned from wearing glasses or contact lenses due to safety fears. The precautions raise concerns about how safe the Active Denial System (ADS) weapon would be if used in real crowd-control situations. The ADS fires a 95-gigahertz microwave beam, which is supposed to heat skin and to cause pain but no physical damage (New Scientist, 27 October 2001, p 26). Little information about its effects has been released, but details of tests in 2003 and 2004 were revealed after Edward Hammond, director of the US Sunshine Project - an organisation campaigning against the use of biological and non-lethal weapons - requested them under the Freedom of Information Act. The tests were carried out at Kirtland Air Force Base in Albuquerque, New Mexico. Two experiments tested pain tolerance levels, while in a third, a "limited military utility assessment", volunteers played the part of rioters or intruders and the ADS was used to drive them away. The experimenters banned glasses and contact lenses to prevent possible eye damage to the subjects, and in the second and third tests removed any metallic objects such as coins and keys to stop hot spots being created on the skin. 
They also checked the volunteers' clothes for certain seams, buttons and zips which might also cause hot spots. The ADS weapon's beam causes pain within 2 to 3 seconds and it becomes intolerable after less than 5 seconds. People's reflex responses to the pain are expected to force them to move out of the beam before their skin can be burnt. But Neil Davison, co-ordinator of the non-lethal weapons research project at the University of Bradford in the UK, says controlling the amount of radiation received may not be that simple. "How do you ensure that the dose doesn't cross the threshold for permanent damage?" he asks. "What happens if someone in a crowd is unable, for whatever reason, to move away from the beam? Does the weapon cut out to prevent overexposure?" During the experiments, people playing rioters put up their hands when hit and were given a 15-second cooling-down period before being targeted again. One person suffered a burn in a previous test when the beam was accidentally used on the wrong power setting. A vehicle-mounted version of ADS called Sheriff could be in service in Iraq in 2006 according to the Department of Defense, and it is also being evaluated by the US Department of Energy for use in defending nuclear facilities. The US marines and police are both working on portable versions, and the US air force is building a system for controlling riots from the air.
Related Articles * [12]Police toy with 'less lethal' weapons * [13]http://www.newscientist.com/article.ns?id=mg18624975.800 * 30 April 2005 * [14]'Agent defeat weapons' ready for use * [15]http://www.newscientist.com/article.ns?id=dn3419 * 21 February 2003 * [16]Microwave beam weapon to disperse crowds * [17]http://www.newscientist.com/article.ns?id=dn1470 * 29 October 2001 Weblinks * [18]US Sunshine Project * [19]http://www.sunshine-project.org/ * [20]Bradford Non-Lethal Weapons Research Project * [21]http://www.brad.ac.uk/acad/nlw/ * [22]Kirtland Air Force Base * [23]http://www.kirtland.af.mil/ E-mail me if you have problems getting the referenced articles. From ljohnson at solution-consulting.com Sat Jul 23 16:34:41 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Sat, 23 Jul 2005 10:34:41 -0600 Subject: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races In-Reply-To: References: Message-ID: <42E271A1.8060206@solution-consulting.com> It seems like a good future to me. I am not going to do a complete exegesis on this, but offer one point. Malik's warning -- "One of the dangers of marketing BiDiL as a black drug is that it may be given to African-Americans who don't respond to it, but denied to non-blacks who could" -- is an example of illogic.
There is no medical rationale for giving the drug to non-African-Americans, since that was the only group with a statistically robust response. In the American Psychological Association the Received Wisdom is that there is no such thing as race, yet in medicine there are some differences. It is not biology that makes them argue this way, IMHO, but Politically Correct Thinking. The idea behind PC talk is that we could change reality by changing our discourse - the constructivist POV. In my view, truth is quite approachable in this life, and we ought to be about it, and I am not convinced we can reframe all reality away. Contradictory thoughts are welcomed. Lynn Premise Checker wrote: > Kenan Malik: Is this the future we really want? Different drugs for > different races > The Times Online guest contributors Opinion > http://www.timesonline.co.uk/article/0,,1072-1658766,00.html > [...] > Kenan Malik is author of Man, Beast and Zombie: What Science Can and > Cannot Tell Us about Human Nature > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych From shovland at mindspring.com Sat Jul 23 18:21:20 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sat, 23 Jul 2005 20:21:20 +0200 (GMT+02:00) Subject: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races Message-ID: <33247264.1122142881202.JavaMail.root@wamui-blood.atl.sa.earthlink.net> There are also sex differences and possibly age differences. The medicine of the future will be custom medicine, tailored to the individual on the basis of careful testing, rather than the shotgun approach we use now. -----Original Message----- From: "Lynn D. Johnson, Ph.D." Sent: Jul 23, 2005 6:34 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races It seems like a good future to me. I am not going to do a complete exegesis on this, but offer one point.
Malik's warning -- "One of the dangers of marketing BiDiL as a black drug is that it may be given to African-Americans who don't respond to it, but denied to non-blacks who could" -- is an example of illogic. [...] Contradictory thoughts are welcomed. Lynn _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From waluk at earthlink.net Sat Jul 23 19:39:45 2005 From: waluk at earthlink.net (Gerry) Date: Sat, 23 Jul 2005 12:39:45 -0700 Subject: [Paleopsych] Kenan Malik: Is this the future we really want?
Different drugs for different races In-Reply-To: <33247264.1122142881202.JavaMail.root@wamui-blood.atl.sa.earthlink.net> References: <33247264.1122142881202.JavaMail.root@wamui-blood.atl.sa.earthlink.net> Message-ID: <42E29D01.3090209@earthlink.net> Lynn writes: >>In the American Psychological Association the Received Wisdom is that there is no such thing as race, yet in medicine there are some differences. It is not biology that makes them argue this way, IMHO, but Political Correct Thinking.>> Steve says: >>The medicine of the future will be custom medicine, tailored to the individual on the basis of careful testing, rather than the shotgun approach we use now.>> Custom medicine tailored to our physical and psychological well being is the ideal for which we need to strive. Yet in the meantime, in the here and now, we need to address the medical needs of groups rather than individuals. Medical studies have been able to profile certain ethnic groups and their propensity for certain diseases. This doesn't mean that everyone in a particular group, say African American, will contract sickle cell anemia but it's a good place to begin. If the APA wishes to deny the term "race" then let them use "ethnic group". Either way, disease clusters in families and ethnic groups. Eliminating the term "race" in no way eradicates disease. Gerry Reinhart-Waller From ljohnson at solution-consulting.com Sat Jul 23 21:43:42 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Sat, 23 Jul 2005 15:43:42 -0600 Subject: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races In-Reply-To: <33247264.1122142881202.JavaMail.root@wamui-blood.atl.sa.earthlink.net> References: <33247264.1122142881202.JavaMail.root@wamui-blood.atl.sa.earthlink.net> Message-ID: <42E2BA0E.9030007@solution-consulting.com> There are clearly age differences. 
As we age, the gut is less able to absorb, yet the liver clears things more slowly, so giving psychotropic drugs to the elderly is a real guessing game. Sex differences have been ignored, much to the detriment of women. The fiction was that men and women are of the same species. Obviously an error. Lynn shovland at mindspring.com wrote: >There are also sex differences and possibly age differences. > >The medicine of the future will be custom medicine, tailored >to the individual on the basis of careful testing, rather than >the shotgun approach we use now. > > > >-----Original Message----- >From: "Lynn D. Johnson, Ph.D." >Sent: Jul 23, 2005 6:34 PM >To: The new improved paleopsych list >Subject: Re: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races > >It seems like a good future to me. I am not going to do a complete >exegesis on this, but offer one point. >Malik's warning: "One of the dangers of marketing BiDiL as > a black drug is that it may be given to African-Americans who don't > respond to it, but denied to non-blacks who could." is an example >of illogic. There is no medical rationale for giving the drug to >non-African Americans since that was the only group with a >statistically robust response. In the American Psychological >Association the Received Wisdom is that there is no such thing as race, >yet in medicine there are some differences. It is not biology that makes >them argue this way, IMHO, but Political Correct Thinking. The idea >behind PC talk is that we could change reality by changing our discourse >- the constructivism POV. In my view, truth is quite approachable in >this life, and we ought to be about it, and I am not convinced we can >reframe all reality away. > Contradictory thoughts are welcomed. >Lynn > >Premise Checker wrote: > > > >>Kenan Malik: Is this the future we really want? 
Different drugs for >>different races >>The Times Online guest contributors Opinion >>http://www.timesonline.co.uk/article/0,,1072-1658766,00.html >>5.6.18 >> >> A US GOVERNMENT advisory panel recommended this week that a drug >>which >> helps to treat congestive heart failure should be granted a licence. >> Its decision is controversial because BiDiL will be the first >> racially-targeted drug. When tested on the general population it >> proved ineffective, but when given to African-Americans, to whom it >> will be marketed, it appeared to cut death rates from heart >>failure by >> 43 per cent. >> >> The BiDiL debate gets to the heart of one of the most explosive >>issues >> in medicine. Does race matter in medicine? Or should it be >> colour-blind? >> >> The New England Journal of Medicine has argued that "race is >> biologically meaningless" and that doctors should be taught about >>"the >> dangers inherent in practising race-based medicine." Others disagree. >> The psychiatrist, Sally Satel, believes that in medicine >>"stereotyping >> often works". In her Washington drug clinic, Satel prescribes >> different amounts of Prozac to black and white patients because, she >> says, the two groups seem to metabolise antidepressants at different >> rates. >> >> So who is right? As with much else in debates about race, the answer >> is both sides and neither. Different populations do show different >> patterns of disease and disorder. Northern Europeans, for instance, >> are more likely to suffer from cystic fibrosis than other groups. >> Tay-Sachs, a fatal disease of the central nervous system, >>particularly >> affects Ashkenazi Jews. Beta-blockers appear to work less effectively >> for African-Americans than those of European descent. >> >> Yet race is not necessarily a good guide to disease. We all think we >> know that sickle-cell anaemia is a black disease. Except that it is >> not. 
Sickle cell is a disease of populations originating from areas >> with a high incidence of malaria. Some of these populations are >>black, >> some are not. The sickle-cell gene is found in equatorial Africa, >> parts of southern Europe, southern Turkey, parts of the Middle East >> and much of central India. Most people, however, only know that >> African-Americans suffer disproportionately from the trait. And, >>given >> popular ideas about race, they automatically assume that what applies >> to black Americans also applies to all blacks and only to blacks. It >> is the social imagination, not the biological reality, of race that >> turns sickle cell into a black disease. >> >> Genetic studies show that human beings comprise a relatively >> homogenous species and that most of our genetic variation is at >> individual, not group, level. Imagine that a nuclear explosion wiped >> out the human race apart from one small population -- say, the Masai >> tribe in East Africa. Virtually all the genetic variation that exists >> in the world today would still be present in that one small group. >> About 85 per cent of human variation occurs between individuals >>within >> local populations. A further 10 per cent or so differentiates >> populations within a race. Only about 5 per cent of total variation >> distinguishes the major races. This is why many scientists reject the >> idea of race. >> >> Since most variation exists at the individual level, doctors ideally >> would like to map every individual's genome to be able to predict >> better his potential medical problems and responses to different >> drugs. Such individual genotyping is currently both impracticable and >> too costly, so doctors often resort to using surrogate indicators of >> an individual's risk profile -- such as race. >> >> Until recently people were more likely to marry a neighbour than >> someone who hailed from distant lands. 
As a result the farther apart >> two populations are geographically, the more distinct they are likely >> to be genetically. Icelanders are genetically different from Greeks, >> but they are genetically closer to Greeks than they are to Nigerians. >> The difference is tiny, but it can have a medical impact. Knowing the >> population from which your ancestors came can provide hints as to >>what >> genes you may be carrying. Hence race, which Satel suggests, is a >> "poor man's clue" in medicine. >> >> But a poor man's clue may be about as reliable as an intelligence >> dossier. First, there are no hard and fast divisions between >> populations. Every population runs into another and no gene is unique >> to one. Cystic fibrosis may be more common among northern Europeans >> but is not confined to them. One of the dangers of marketing BiDiL as >> a black drug is that it may be given to African-Americans who don't >> respond to it, but denied to non-blacks who could. Secondly, >>different >> genes are distributed differently among populations. The pattern of >> distribution of genes for cystic fibrosis is not the same as that of >> sickle-cell genes. Which population differences are important varies >> from one disease to another. Finally, many medical differences >> associated with race are likely to be the result of environmental >> rather than genetic differences, or a combination of the two. In the >> case of response to BiDiL, no one knows which is more important. >> >> All this suggests that the question of whether medicine should be >> colourblind depends on the particular problem we want to address. It >> is a pragmatic issue, not one rooted in scientific or political >> principle. Race, however, is such a contentious issue that pragmatism >> rarely enters the debate. On one side, so-called race realists think >> that population differences are so important that all medicine should >> be colour-coded. 
On the other, many antiracists want to ban >>race-based >> research entirely for fear of its social consequences. Both are >>wrong. >> It is time everyone calmed down and took a grown-up view of the >>issue. >> >> Kenan Malik is author of Man, Beast and Zombie: What Science Can and >> Cannot Tell Us about Human Nature >>_______________________________________________ >>paleopsych mailing list >>paleopsych at paleopsych.org >>http://lists.paleopsych.org/mailman/listinfo/paleopsych From shovland at mindspring.com Sun Jul 24 06:25:04 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 24 Jul 2005 08:25:04 +0200 (GMT+02:00) Subject: [Paleopsych] Kenan Malik: Is this the future we really want? Different drugs for different races Message-ID: <33420372.1122186304312.JavaMail.root@wamui-blood.atl.sa.earthlink.net> Some idea of the potential: Years ago I heard that we can examine urine for as many as 150 factors. Neural net software can be used to integrate all of that information. (A human can only integrate 7-9 variables.) My preventive medicine doctor recently informed me of a new test that checks vitamin levels inside the cells. Steve (from Budapest) -----Original Message----- From: Gerry Sent: Jul 23, 2005 9:39 PM To: shovland at mindspring.com, The new improved paleopsych list , "Lynn D. Johnson, Ph.D." Subject: Re: [Paleopsych] Kenan Malik: Is this the future we really want? 
Different drugs for different races Lynn writes: >>In the American Psychological Association the Received Wisdom is that there is no such thing as race, yet in medicine there are some differences. It is not biology that makes them argue this way, IMHO, but Political Correct Thinking.>> Steve says: >>The medicine of the future will be custom medicine, tailored to the individual on the basis of careful testing, rather than the shotgun approach we use now.>> Custom medicine tailored to our physical and psychological well being is the ideal for which we need to strive. Yet in the meantime, in the here and now, we need to address the medical needs of groups rather than individuals. Medical studies have been able to profile certain ethnic groups and their propensity for certain diseases. This doesn't mean that everyone in a particular group, say African American, will contract sickle cell anemia but it's a good place to begin. If the APA wishes to deny the term "race" then let them use "ethnic group". Either way, disease clusters in families and ethnic groups. Eliminating the term "race" in no way eradicates disease. Gerry Reinhart-Waller From checker at panix.com Sun Jul 24 15:00:48 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:00:48 -0400 (EDT) Subject: [Paleopsych] H-N: Thos. E. Dickins: A Necessary Pain in the Heart Message-ID: Thos. E. Dickins: A Necessary Pain in the Heart http://human-nature.com/ep/reviews/ep03175178.html 5.7.10 [Thanks to Laird for this.] Evolutionary Psychology 3: 175-178 Book Review A Necessary Pain in the Heart* A Review of Why We Lie: The Evolutionary Roots of Deception and the Unconscious Mind by David Livingstone Smith. New York: St Martin's Press. ISBN 0-312-31039, 2004. Thomas E. Dickins, School of Psychology, University of East London and Centre for Philosophy of Natural and Social Science, London School of Economics, London E15 4LZ, United Kingdom. 
* This is a line from Stevie Wonder's song Ordinary Pain, on his album Songs in the Key of Life, 1976. This song advocates a stringent functionalism about emotional responses. Six years ago, at the annual Human Behaviour and Evolution Society conference, I sat down to dinner with a group of fellow evolutionary behavioural scientists. Everyone was in high conference spirits and everyone at my table was male. Soon conversation moved from social gossip about fellow delegates to talking about relationships, and one of our number posed the question "does working in this field hinder your romantic relationships?" The table was divided, with half of the men claiming no influence whatsoever, for in those intimate circumstances their behaviours simply played out naturally. The other half saw knowledge about evolved mating behaviours as a hindrance to their interactions, for they often failed to seize the moment and instead went off-line and observed the interaction with a critical eye. I placed myself in this latter camp. At the time the conversation was an amusing conceit and I thought little more about it. But during the course of the following six years my personal life continued and, as it turned out, my relationship history unfolded somewhat unfortunately. When my wife and I separated I quite naturally tried to think about the situation from an evolutionary perspective and I asked myself whether I could conceptualize the failure in our relationship in terms of what I knew about mating decisions. Of course, I soon chastised myself for trying to jump from statements about human universals to an analysis of the fine-grained sequences of behaviour that constituted my marriage. Nonetheless, I had started down a particular road in my thinking. 
I did not understand the nature of the emotional pain I felt, but I recognised that it was patterned; I did not understand my motivations for saying certain things during the separation process, but I saw that they achieved certain specific effects. Surely, these things were not idiosyncratic to me and surely there must be a functional story to tell about this aspect of psychology? Whilst I reflected on this I remembered the conversation at the conference and realised that no one on the table had claimed that an evolutionary perspective could help a relationship; the only expressed options were no effect or hindrance. Perhaps, I rather grandly reasoned, an evolutionary account of the emotions felt around separation might form the foundation of a useful therapeutic tool. During discussions with a number of patient colleagues, one of them reminded me of Freud's ambitions. Freud had hoped to integrate an account of personal-level psychological machinations with contemporary neurological science. Freud, of course, failed in this attempt but his expression of the problem can only be seen as useful. Recently, Timothy D. Wilson (2002) has more formally resurrected Freud's project in his book Strangers to Ourselves. The subtitle of this book is Discovering the Adaptive Unconscious, which captures Wilson's thesis that much of our psychology is unconscious and adapted to solve specific problems. Our conscious, or personal-level processing is perhaps best seen as a calibrational tool, or set of tools, that finesses work done by the unconscious. In making this claim, Wilson brings the Freudian project into contact with modern evolutionary approaches. However, Wilson does not offer a detailed adaptationist analysis of our unconscious psychology and instead tantalisingly hints at a variety of possible functions that are served by such processes. 
David Livingstone Smith's book, on the other hand, sets out to achieve an adaptationist decomposition of at least one aspect of our unconscious psychology; that which delivers/underlies social manipulation. The first half of the book is an introduction to evolutionary psychology and to theories of deception and self-deception. It is from this half that the book gains its title. For those well versed in evolutionary approaches to the behavioural sciences this can be skipped; however, for those who are not, its light touch and pace will bring them rapidly to a point where Smith's core thesis can be digested. It is as follows. We tell stories; or rather we construct narratives about much of what goes on in our lives. These narratives are for our own private consumption, to explain events as well as to shape and predict futures. Our stories find public uses too, for they act as communicative structures. However, most of our conversational machinations are not, in fact, under personal-level control, but instead are under the unconscious or sub-personal-level control of a social module. This module is a domain-specific device, in keeping with contemporary assumptions in Evolutionary Psychology, and it delivers (small-p) political insight, in keeping with the Machiavellian Intelligence Hypothesis, as well as more general social scanning. The key point is that this module renders us highly sensitive to other people and it influences our narration in such a way as to deliver unintended messages. At the personal level we tell ourselves we are delivering message x, but our sub-personal-level cognition is in fact causing us to send message y. An example of such coded communication happened when I once entered a public house in the U.K. with a fellow academic. We were engaged in a debate about some aspect of cognition, vigorously disagreeing with each other while we found somewhere to sit. 
We eventually perched on the edge of a shared bench and continued arguing, seemingly oblivious to the world around us. At one point my antagonist, who had grown frustrated with my line of argument, declared that "your argument is about as useful as a one-armed man on a building site." Sitting next to him, further along the bench, was a one-armed man who was clearly manipulating pints, wallets, cash and handshakes in a different manner than most. No one had mentioned this man, and my antagonist claimed he had not even seen him; but, according to Smith, the likelihood is that my colleague had seen him, had registered his loss of an arm, and had had a series of thoughts about the consequences of such an injury. Such features are of importance to a social animal, and according to Smith, are the kind of thing we might comment on to the extent that even if we do not directly discuss the issue, it will find a way to be expressed in our conversation. Smith has many examples of situations in which public pronouncements indirectly (and not always too subtly) convey messages about key social facts. One striking example is of a conversation among some of Smith's students. Three students had turned up to a class on a harsh winter's morning, and the remaining four had not. Whilst they were waiting for the class to start a conversation ensued that included the following exchange: Amy : I heard a horrible story on the news, but I can't remember what it was. Michelle : There was this guy who drove up into the mountains with his three-year-old child. He went out hunting and left the kid all by himself in the truck. When he came back his son was frozen to death. He just went off to enjoy himself, and when he came back his son was dead. (p. 
129) Smith claims that this conversation was a coded way of commenting on the absence of the other class members, and that the "man in the story appears to stand for the absent students and his abandoned child stands for the three students who turned up for class" (p. 130). Amy and Michelle would not necessarily have been aware of this, but their concerns were filtering through, none the less. It is clear from the above examples that Smith has retained much of the Freudian project. Here we have an attempt to uncover unconscious motivations by attending to the content of conversations, which is reminiscent of psychoanalysis. Analysing conversations in this manner, as Smith readily admits, appears to stretch credulity at points: what external measure do we have to validate such claims? Nonetheless, Smith is not putting this forward as a fait accompli but rather as an open hypothesis for future refining and testing. Smith's thesis presents an interesting counter to many social scientists working in the constructionist tradition (see Dickins, 2004, for a discussion of this tradition and its weaknesses). In its mild form this tradition claims that much of our knowledge about the world is socially constructed in a language that does not directly represent reality. Instead, we create narratives that reflect our various interests, and that are malleable in the face of small-p and big-p political forces. Such narration impairs our ability to deliver objective knowledge about the world, according to some theorists. A typical (and adequate) retort to this position is to undermine the wholesale application of the concept of narration and present some form of realist philosophy of science. Smith has extended this reply by treating human narrative practices, in social situations, as a phenomenon to be explained; as something that is patterned, seemingly designed and therefore open to an adaptationist analysis. 
Smith has in effect asked the question - "if we generate narratives then what are their properties and how do we understand them?" His answer is that they are highly social and indirect forms of communication that are influenced by a Machiavellian module. Although the book is well written and engaging, it is not entirely clear how to relate the discussions of deception to the discussion of unconscious influences on our narratives. One possible link is through the discussion of self-deception, in which Smith outlines the familiar argument that the best way to deceive others is by deceiving ourselves. In this way we are so certain of the untruth that we will not give away any "tells" that might undermine the necessary deception. Such an idea is clearly an aspect of the relationship between personal- and sub-personal-level interactions; but functionally this is quite distinct from the indirect signalling functions of our narratives. At most, all that can be said is that both deception and indirect signalling are about social manipulation, but this is too coarse-grained an analysis to yield a useful evolutionary psychology. Instead it seems that Smith has discussed two aspects of the evolutionary Freudian project. I opened this review by asking whether or not, as with the original Freudian project, evolutionary psychology could ever hope to deliver understanding of human troubles, and perhaps even some order of therapeutic intervention. Smith has not attempted to do this (despite a therapeutic background) but his thesis must surely be of interest to those involved in the "talking therapies"; indeed, some of Smith's examples come from therapeutic conversations. By turning an adaptationist eye to the possible sub-textual social signalling of our narratives we might begin to recognise patterns of expression that are indicative of malaise and low mood. 
We might also begin to see how seemingly normal conversations between people, in whatever form of relationship, are encoding and signalling discontent and frustrations. Just as we are uncovering the necessary elements of emotional pain, so we might uncover the ordinary sub-personal signals of everyday conversation. References Dickins, T. E. (2004). Social Constructionism as Cognitive Science. Journal for the Theory of Social Behaviour, 34 (4), 333-352. Wilson, T. D. (2002). Strangers to Ourselves: Discovering the Adaptive Unconscious. London: The Belknap Press of Harvard University Press. Citation Dickins, T. E. (2005). A Necessary Pain in the Heart. A Review of Why We Lie: The Evolutionary Roots of Deception and the Unconscious Mind by David Livingstone Smith. Evolutionary Psychology, 3:175-178. Thomas Dickins (t.dickins at uel.ac.uk) From checker at panix.com Sun Jul 24 15:00:54 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:00:54 -0400 (EDT) Subject: [Paleopsych] TLS: Carol Tavris: Happy? Message-ID: Carol Tavris: Happy? The Times Literary Supplement, 5.7.15 http://www.the-tls.co.uk/this_week/story.aspx?story_id=2111143 HAPPINESS Lessons from a new science Richard Layard 309pp. | Allen Lane. £17.99. 0 713 99769 9. US: Penguin Press. $25.95. | 1 594 20039 4 MAKING HAPPY PEOPLE The nature of happiness and its origins in childhood Paul Martin 306pp. | Fourth Estate. £15.99. | 0 00 712706 5 GOING SANE Adam Phillips 245pp. | Hamish Hamilton. £14.99. 0 241 14209 1. US: Fourth Estate. | 0 007 15539 5 In the early 1970s, when a friend and I were newly hatched social psychologists, we decided to write a book on happiness. The head of an eminent Boston publishing house took pity on us and, over lunch, explained the facts of life. "No one wants to read a book on happiness", he said kindly. "Happy people don't; why in the world would they want to? They are already happy. Unhappy people don't want to, either. 
Why in the world would they want to read about happy people when they are feeling sullen and miserable? Moreover, it's faintly embarrassing to be seen on a bus or park bench reading a book on happiness. It's like being caught reading a book on paedophilia. A passer-by will question your motives." And so my friend and I went our separate ways; he to write a book on loneliness, and I, a book on anger. But time and psychology have marched on, and now we are in the midst of, if not a happiness epidemic, a happiness-book epidemic. The "positive psychology" movement, a contemporary incarnation of its humanist predecessors (though its proponents will wrestle you to the ground denying their heritage), is again eager to help people reach their "fullest potential", as Abraham Maslow advocated in the 1960s. This time around, however, the movement has produced a wave of research on who is happy, who isn't, and why. These researchers often manage to hold their professional meetings in places like Bermuda in the winter, which suggests that positive psychologists know how to practise what they preach. An inherent but debatable assumption in positive psychology is that happiness is a quality that can be fostered or suppressed, raised or lowered, by the conditions of our lives, our choices and our mental habits. There is considerable evidence, however, that each of us has something like a happiness thermostat that keeps us bubbling along at the level we were set to be. It drops during extreme conditions (war, violence, bereavement, chronic poverty) and rises during times of celebration, but otherwise remains steady in a middle range. Daniel Gilbert, a professor of psychology at Harvard whose book Stumbling on Happiness will be published next year, has found that most people assume that they will be emotionally devastated by misfortune, and so they overestimate the intensity and duration of breakups, divorces, financial losses, insults, injuries and trauma. 
People do suffer from these experiences, but eventually most return to normal, and sooner than they would have believed. "Our ability to spin gold from the dross of our experience means that we often find ourselves flourishing in circumstances we once dreaded", Gilbert has written. "We fear divorces, natural disasters and financial hardships until they happen, at which point we recognize them as opportunities to reinvent ourselves, to bond with our neighbors and to transcend the spiritual poverty of material excess." Most of us are basically happy, in other words, unless we suffer from chronic depression or are afflicted with the personality disposition that behavioural geneticists call "negative affectivity", a tendency to be crabby, critical, bitter and irritable no matter what happens. Richard Layard, an economist and member of the House of Lords, and Paul Martin, a behavioural biologist, have both produced cheerful, optimistic books that dispute this view of happiness. Marshalling studies of the social, psychological, economic, cognitive and neurological contributions to happiness, they make a case for building a society that can improve the happiness and well-being of its citizens as well as their material security. Layard and Martin write in a simple conversational style that is well suited to their subject, neither ponderous nor pretentious, though readers will not find here the elegance or wit - or scepticism - that philosophers through the ages have brought to this subject. In his Happiness: Lessons from a new science, Layard dispenses with definitions. By happiness, he means "feeling good - enjoying life and wanting the feeling to be maintained. By unhappiness I mean feeling bad and wishing things were different". (He does not consider those of us who feel good and wish things were different.) 
Martin, in Making Happy People, gets closer by defining happiness as a combination of pleasure, the absence of unpleasant emotions and pain, and the judgement that one's life is good. In his view, happiness consists neither of the mindless pursuit of pleasure nor of Spinoza's insensate "rational understanding of life and the world", but of a blend of good feeling and smart thinking. It is more than the absence of unhappiness, just as health is more than the absence of disease - or, as Adam Phillips would add, just as sanity is more than the absence of insanity. Both Layard's and Martin's books make the case that happy people feel better, achieve more, create more, enjoy better health, live longer and make better friends and partners than the gloomy misanthropes among us. (Oh, all right, there is a place for people who are depressed, angry and rebellious, the ones out there raising hell about injustice and war, for example.) Martin lists the factors that contribute to happiness, but because the positive-psychology people tend to study attributes rather than individuals ("do happy people feel more in control of their lives than unhappy people?"), it is a long and overlapping list: connectedness, social and emotional competence, freedom from anxiety, communication skills, meaningful activity, a sense of control, a sense of purpose and meaning, resilience, self-esteem, optimism, having an outward focus, humour, playfulness, wisdom and "flow", engagement in an activity for its own sake. Also, it helps to get a good night's sleep and to exercise regularly. Oh, and it's also good if you can avoid spending too many hours commuting to work. And wait, education is critically important, too. After reading all of these ingredients of happiness, the reader may feel a need to simplify - say, by taking a nap or having a nice cup of tea and a scone. It is tempting to make fun of happiness books: they are such an easy target, soft and plump, just asking to be pinched. 
The new ones have the imprimatur of science on observations that have been made for centuries: money can't buy happiness; human beings need social bonds, satisfying work and strong communities; there is nothing good or bad but thinking makes it so; a life based entirely on the pursuit of money and pleasure ultimately becomes pleasureless. Layard and Martin's work, however, has the virtue of asking readers to think about why it is that, though we know what makes us happy, we consistently organize our lives and make choices in such a way that makes us unhappy. The problem is comparable to the worldwide epidemic of obesity. Evolution has seen to it that most human beings gain weight when food is easily available, tasty, rich, varied and cheap, as it is in all developed nations today. When diets consist of the same food day after day, people habituate to what they are eating and eat less of it. As soon as food becomes more varied - as it is in the multi-ethnic choices now available in all big cities - people eat more and gain more weight. What then should be done, if anything, about obesity as a public health problem? Some individuals are able to summon the will-power to change their eating habits, but will-power won't go far on a global scale, not with the proliferation of the high-calorie, cheap fast food that humans love to eat, that the poor can afford, that many cultures equate with nurturance, and that makes billions for its marketers. With happiness as with food, what is and feels good in the short run is not always what is good over time. Consider television, which is to happiness what McDonald's is to slenderness. People enjoy television for many reasons, and even infants will turn to its rapidly changing colourful images as a plant does to the sun. In excess, television promotes passivity and anxiety, filling time that people might otherwise spend on activities that are intrinsically satisfying and create a sense of competence.
Yet, given a choice, many people choose television and other narcotic pleasures that dull the mind and quell its restless search for meaning over activities that, in their complexity and challenge, offer the real promise of satisfaction. Likewise, the ubiquity of advertising - the engine that drives the marketplace - creates a craving for material things that promise happiness. A new thing will do so, for a while; then the purchaser habituates to it and soon needs another thing to boost happiness. The resulting "hedonic treadmill" is as likely to interfere with true happiness as a baby is with sex. Is this dilemma best left to each individual to handle or is it one that governments should tackle? Both Martin and Layard believe that governments can and should do more. "A radical pro-happiness government would acknowledge that rampant consumerism and advertising undermine happiness", Paul Martin argues, "and it might even consider using taxation or regulation to discourage them." Professor Layard takes it further, proposing that government should make the happiness of its citizens a primary goal, the heart of its public and economic policy, using laws and taxes to reward cooperation in pursuit of a common good, make work life more compatible with family life, help the poor, reduce rates of mental illness, subsidize activities that promote "community life", reduce commuting time, eliminate high unemployment, prohibit commercial advertising to children (as Sweden does) . . . . If the thermostat theory is right, none of this will raise the overall happiness level of the population, and some temperamentally grouchy people will complain that they miss the traffic, but who cares? Sign me up. The reason that social scientists have studied the negative side of human behaviour far more often than the positive side is apparent in Adam Phillips's Going Sane. If happiness is elusive, sanity is evanescent.
"There is something about the whole notion of sanity that seems to make us averse to defining it", writes Phillips. Could it be because sanity isn't an "it"? Social scientists and psychiatrists can define and measure the many emotions that are incompatible with happiness (grief, bitterness, melancholy, worry and their kin) and the mental disorders they might agree are incompatible with sanity (schizophrenia and other psychoses), but there is a reason they have shied away from defining and measuring "normal" happiness and sanity - these are moral and philosophic concepts, not psychological or medical ones. "Sanities should be elaborated in the way that diagnoses of pathology are", Adam Phillips suggests; "they should be contested like syndromes, debated as to their causes and contributions and outcomes, exactly as illnesses are." Having left this daunting task to others, he ends up speaking of "the superficially sane" and the "deeply sane", whatever that distinction means. The book is full of the kind of psychoanalytic generalizations that may cause the reader temporary insanity: "So the sane have a sense that anything they want is either going to frustrate them because it isn't quite what they really want; or it is going to horrify them because it is more nearly what they want, and so they will be unable to enjoy it". Happiness and sanity? Let's be glad we know them when we feel them, and concentrate instead on ways of reducing pain, anguish and rage. Succeeding in that effort will do as much for human happiness as penicillin did for human health. From checker at panix.com Sun Jul 24 15:01:00 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:01:00 -0400 (EDT) Subject: [Paleopsych] WP: Robert J. Samuelson: The End of Europe Message-ID: Robert J.
Samuelson: The End of Europe http://www.washingtonpost.com/wp-dyn/content/article/2005/06/14/AR2005061401340_pf.html Wednesday, June 15, 2005; A25 Europe as we know it is slowly going out of business. Since French and Dutch voters rejected the proposed constitution of the European Union, we've heard countless theories as to why: the unreality of trying to forge 25 E.U. countries into a United States of Europe; fear of ceding excessive power to Brussels, the E.U. capital; and an irrational backlash against globalization. Whatever their truth, these theories miss a larger reality: Unless Europe reverses two trends -- low birthrates and meager economic growth -- it faces a bleak future of rising domestic discontent and falling global power. Actually, that future has already arrived. Ever since 1498, after Vasco da Gama rounded the Cape of Good Hope and opened trade to the Far East, Europe has shaped global history, for good and ill. It settled North and South America, invented modern science, led the Industrial Revolution, oversaw the slave trade, created huge colonial empires, and unleashed the world's two most destructive wars. This pivotal Europe is now vanishing -- and not merely because it's overshadowed by Asia and the United States. It's hard to be a great power if your population is shriveling. Europe's birthrates have dropped well below the replacement rate of 2.1 children for each woman of childbearing age. For Western Europe as a whole, the rate is 1.5. It's 1.4 in Germany and 1.3 in Italy. In a century -- if these rates continue -- there won't be many Germans in Germany or Italians in Italy. Even assuming some increase in birthrates and continued immigration, Western Europe's population grows dramatically grayer, projects the U.S. Census Bureau. Now about one-sixth of the population is 65 and older. By 2030 that would be one-fourth, and by 2050 almost one-third. 
No one knows how well modern economies will perform with so many elderly people, heavily dependent on government benefits (read: higher taxes). But Europe's economy is already faltering. In the 1970s annual growth for the 12 countries now using the euro averaged almost 3 percent; from 2001 to 2004 the annual average was 1.2 percent. In 1974 those countries had unemployment of 2.4 percent; in 2004 the rate was 8.9 percent. Wherever they look, Western Europeans feel their way of life threatened. One solution to low birthrates is higher immigration. But many Europeans don't like the immigrants they have -- often Muslim from North Africa -- and don't want more. One way to revive economic growth would be to reduce social benefits, taxes and regulations. But that would imperil Europe's "social model," which supposedly blends capitalism's efficiency and socialism's compassion. Consider some contrasts with the United States, as reported by the Organization for Economic Cooperation and Development. With high unemployment benefits, almost half of Western Europe's jobless have been out of work a year or more; the U.S. figure is about 12 percent. Or take early retirement. In 2003 about 60 percent of Americans ages 55 to 64 had jobs. The comparable figures for France, Italy and Germany were 37 percent, 30 percent and 39 percent. The truth is that Europeans like early retirement, high jobless benefits and long vacations. The trouble is that so much benevolence requires a strong economy, while the sources of all this benevolence -- high taxes, stiff regulations -- weaken the economy. With aging populations, the contradictions will only thicken. Indeed, some scholarly research suggests that high old-age benefits partly explain low birthrates. With the state paying for old age, who needs children as caregivers? High taxes may also deter young couples from assuming the added costs of children. You can raise two objections to this sort of analysis. 
First, other countries are also aging and face problems similar to Europe's. True. But the aging is more pronounced in Europe and a few other nations (Japan, for instance), precisely because birthrates are so low. The U.S. birthrate, for example, is 2.1; even removing births to Hispanic Americans, it's about 1.9, reports Nicholas Eberstadt of the American Enterprise Institute. Second, Europeans could do something about their predicament. Also, true -- they could, but they're not. A few countries (Britain, Ireland, the Netherlands) have acted, and there are differences between Eastern and Western Europe. But in general Europe is immobilized by its problems. This is the classic dilemma of democracy: Too many people benefit from the status quo to change it; but the status quo isn't sustainable. Even modest efforts in France and Germany to curb social benefits have triggered backlashes. Many Europeans -- maybe most -- live in a state of delusion. Believing things should continue as before, they see almost any change as menacing. In reality, the new E.U. constitution wasn't radical; neither adoption nor rejection would much alter everyday life. But it symbolized change and thereby became a lightning rod for many sources of discontent (over immigration in Holland, poor economic growth in France). All this is bad for Europe -- and the United States. A weak European economy is one reason that the world economy is shaky and so dependent on American growth. Preoccupied with divisions at home, Europe is history's has-been. It isn't a strong American ally, not simply because it disagrees with some U.S. policies but also because it doesn't want to make the commitments required of a strong ally. Unwilling to address their genuine problems, Europeans become more reflexively critical of America. This gives the impression that they're active on the world stage, even as they're quietly acquiescing in their own decline. 
From checker at panix.com Sun Jul 24 15:01:11 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:01:11 -0400 (EDT) Subject: [Paleopsych] NYT: Who's Afraid of China Inc.? Message-ID: Who's Afraid of China Inc.? http://www.nytimes.com/2005/07/24/business/yourmoney/24oil.html By STEVE LOHR WILLIAM A. REINSCH, an avowed free trader, welcomes China's rising stature in the international economy. After all, he is the president of the National Foreign Trade Council, an organization founded in 1914 to promote an "open world trading system." Indeed, when he was a senior trade official in the Clinton administration, Mr. Reinsch was chided by some security analysts who said he was being soft on China by placing matters of commerce ahead of national security. But even Mr. Reinsch is uneasy about China's attempt to buy Unocal, a midsize American oil company. The outcome of the takeover contest for Unocal is uncertain, and last week its board embraced an improved offer from Chevron. Yet Cnooc, a government-backed Chinese oil company, still has the higher offer - and it could up the ante. If the Chinese bid proceeds, Mr. Reinsch wants to see a thorough national security review of the deal, one that goes beyond the usual focus on weapons technology to include energy security. "Our Army, Navy and Air Force run on oil," he explained. Oil is the ultimate geopolitical commodity - it is "The Prize," as Daniel Yergin titled his epic history of petroleum and international politics. And even if Cnooc fails to grab Unocal, the pursuit has pushed the two sides of the Chinese challenge together and into the spotlight of public debate. For China is both an engine of economic globalization and an emerging military power. In symbolic shorthand, it is Wal-Mart with an army. The two sides aren't neatly divided.
But those who focus on economics tend to see partnership, cooperation and reasons for optimism despite tensions, while security experts are more pessimistic and anticipate strategic conflict as the likely future for two political systems that are so different. In China, there are also two camps - the security hawks and the economic modernists, according to China analysts. The modernists see China joining the United States as the second great economic power of the 21st century, and the two nations sharing the gains from increased trade ties and global growth. The hawks regard that view as naïve, and fret that American policy is to remain the world's only superpower and to curb China's rise. So China's response, the hawks say, is to try to erode United States hegemony and reduce America's power to hold China down. Both faces of China have been evident recently. Two weeks ago, a senior Chinese military official, Maj. Gen. Zhu Chenghu, said China should use nuclear weapons against the United States if the American military intervenes in any conflict over Taiwan. Then, bowing to pressure from the United States and other trading partners, China announced last Thursday that it would no longer peg its currency tightly to the dollar. It is a measured step, and it will not do much to moderate China's huge trade surplus with the United States anytime soon. But the move is a sign of flexibility and accommodation. "Do we see each other inevitably as antagonists, or do we see a world of globalization from which both sides benefit? That is the big issue," said Kenneth Lieberthal, a senior official in the National Security Council during the Clinton administration. "And that framework, one way or another," added Mr. Lieberthal, a China analyst and a professor at the University of Michigan business school, "will drive an enormous number of policy decisions." So that is the China question: Is it an opportunity or a threat?
If nothing else, the Cnooc bid for Unocal has shown how unsettled American thinking is on China and how deep the anxieties run, both in matters of national security and trade. It is easy to dismiss Washington as a hot-air factory, but the scope of the outcry in Congress is significant. Resolutions and legislative proposals, all critical of Cnooc's takeover bid, have piled up in the House and Senate, from Republicans and Democrats. A resolution presented last month by Representative Richard W. Pombo, a California Republican, declared that permitting the Chinese company to buy Unocal would "threaten to impair the national security of the United States." It passed, 398 to 15. Senator Byron Dorgan, a North Dakota Democrat, has drafted three pieces of anti-Cnooc legislation that range from calling for a six-month Congressional inquiry into the bid to a bill that would prohibit the deal. Mr. Dorgan objects to the Chinese move on fair-trade grounds. The Chinese government, he says, would not allow an American company to buy a Chinese oil company. "So why on earth should they be able to buy an American oil company?" Mr. Dorgan said. Yet the Chinese takeover bid taps into a deeper concern about trade and globalization for Mr. Dorgan. He talks of manufacturing jobs lost to China, intellectual-property pirates in China illegally copying American movies and software, and a trade deficit with China that is rising astronomically with no end in sight. "Trade should be mutually beneficial, and it is certainly not with China," Mr. Dorgan said. The tempest in Congress has increased the political risks surrounding the Cnooc bid. At $18.5 billion, the bid remains higher than Chevron's sweetened offer of $17 billion. But Wall Street analysts say Cnooc will have to go higher to have a chance to win, offering a sizable premium over the Chevron bid to compensate for delays of a government review of the Chinese offer or even the possibility that Washington may block a Chinese deal. 
It would be an extreme step, but Congress has the power to "regulate commerce with foreign nations," under Article I, Section 8 of the Constitution. "My sense is that Congress is not going to stand still for a Cnooc takeover being approved," said C. Richard D'Amato, chairman of the United States-China Economic and Security Review Commission, an advisory group to Congress. "That is the political reality." Cnooc and its advisers misread the political environment in Washington. Fu Chengyu, the Cnooc chairman who earned a graduate degree from the University of Southern California, has said he was surprised by the intensity of political criticism. Cnooc's path would have been smoother if it had joined with an American oil company as a partner in its bid, an option that was considered briefly but rejected, according to a person close to the company. The idea, the person said, would have been that the American company would acquire Unocal's assets in the United States, while Cnooc took the main prize in the deal - Unocal's offshore natural gas fields in Asia and its expertise in offshore exploration and production. The gas reserves and skill are considered strategic to China's goal of moving away from coal and generating 20 percent of the nation's electricity from natural gas by 2020. "It would have been better to have not made this big move a head-on attack, to have linked up with an American partner so the deal would have been less threatening and less a lightning rod for China politics in the United States," the person said. Perhaps, but many economists and trade specialists contend that the American angst over the Cnooc bid says more about the United States than it does about China or Cnooc's tactics. "All this really points to the anxieties about globalization in our own society," said Clyde V. Prestowitz, a trade official in the Reagan administration and president of the Economic Strategy Institute in Washington. 
"We are so economically interdependent with China now and we chose that path." Washington pushed for China's integration into the international economy and its entry into the World Trade Organization in 2001. American companies have farmed out much of their manufacturing to Chinese factories. American consumers have been on a Chinese shopping spree for years, buying everything from clothes to computers made there. That is why the United States had a record $162 billion trade deficit with China last year. China sits on $700 billion in foreign exchange reserves, mostly in dollars. It recycles those funds in good part by investing in United States Treasury bonds; that keeps American interest rates low, fueling the real estate boom. "We handed China the money they are using to try to buy Unocal," said Mr. Prestowitz, author of a new book on the shift of wealth and power to Asia, "Three Billion New Capitalists" (Basic Books, 2005). "And now we're telling the Chinese, please keep investing in our bonds but you can't invest what amounts to a sliver of their surplus in an oil company. That's really confused and hypocritical on our part." Where others see muddle, R. James Woolsey, director of the Central Intelligence Agency in the Clinton administration, sees strategic clarity in challenging the Cnooc bid. Oil is a globally traded commodity, Mr. Woolsey concedes, but it is also a strategic resource in a market that is tightening because of rising demand from fast-growing nations like China and India. That, Mr. Woolsey says, is before one begins thinking of the possible impact of, say, an act of terrorist sabotage in a crucial Middle East oil field. "China is realistically assuming there may be a shortage of oil," said Mr. Woolsey, a vice president in the Booz Allen Hamilton consulting firm. In China, Mr. Woolsey sees a nation with military ambitions to challenge the United States, and a political system with little regard for human rights and free speech. Cnooc, in Mr. 
Woolsey's view, is the corporate vehicle of "a Communist dictatorship." The Cnooc move, according to Frank Gaffney Jr., a senior Defense Department official in the Reagan administration, is a step to ensure that China has the resources for its overarching national design. "China's strategy is to supplant the United States as the premier economic power in the world and, should it become necessary, defeat us militarily," said Mr. Gaffney, president of the Center for Security Policy. The strategic concern was much narrower at William Blair & Company. Until recently, William Blair, the investment firm in Chicago, was the largest outside shareholder in Cnooc, which is majority-owned by the Chinese government. But William Blair sold off its stake, worth about $160 million, in recent weeks because of worries that Cnooc was behaving too much like a state-owned company and not enough like a capitalist enterprise trying to maximize returns to shareholders, explained David Merjan, a fund manager at the firm. The pricey bid for Unocal, Mr. Merjan said, raised doubts about how independent Cnooc really was from the Chinese government. "If China is going to sell shares in a company like Cnooc to outside shareholders, it should not be run for the benefit of Chinese economic policy," Mr. Merjan said. CNOOC and its pursuit of Unocal, it seems, are part of China's evolutionary path. Cnooc is playing its hand with plenty of government help, about $7 billion in loans on terms Western oil companies could not hope to get. Accordingly, Cnooc may be willing and able to overpay. Yes, China is hunting for oil and gas assets around the world as a national priority. Still, that is happening in a nation that is drifting steadily toward a market economy, though one with more central control than Americans view as a free-market economy. The Chinese Communist Party, with 60 million members - more than the population of France - does guide the economy, if less and less over time.
"But think of it as the Chinese bureaucratic capitalist party," said Mr. Lieberthal of the University of Michigan. "It has nothing really to do with Communism." Mr. Lieberthal counts himself as among the optimists on China. Globalization, he says, and continued integration of the Chinese and American economies can work to mutual benefit. The spread of middle-class affluence and education across more of the Chinese population should eventually be a force for democratic liberalization, following the pattern of Taiwan and South Korea. "Am I a hundred percent sure I'm right? No, but that's the long-term bet I'd make," Mr. Lieberthal said. "And if you let the pessimists - the people who believe that the U.S. and China will inevitably be enemies - drive policy, then the outcome will be the one they predict." China's pursuit of Unocal puts some Wall Street firms in an awkward situation. DealBook, Page 5. From checker at panix.com Sun Jul 24 15:04:02 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:04:02 -0400 (EDT) Subject: [Paleopsych] NYT: Mystery Woodpecker Upends a Bird Lover's Life Message-ID: Mystery Woodpecker Upends a Bird Lover's Life http://www.nytimes.com/2005/07/24/science/24bird.html [Where does the official skeptic, Michael Shermer, stand on this?] By JAMES GORMAN HUNTSVILLE, Ala., July 23 - In the church of birds, where passions run high and prophets emerge from swamps and thickets with revelations, nothing can ruin a reputation like admitting that you have seen an ivory-billed woodpecker. Bobby Harrison, a large, gentle man with thinning hair and a soft Alabama drawl, knows this and can recite the casualties. Consider John V. Dennis, one of Mr. Harrison's heroes. He took the last accepted photograph of an ivory bill in Cuba in 1948. But when he testified to seeing one in the Big Thicket area of southeast Texas in 1966, he was ridiculed. Even worse, at a 1971 meeting of ornithologists, George H.
Lowery Jr., head of the Louisiana State Museum of Natural Science, presented what he was convinced were photographs of an ivory bill, taken by an acquaintance he would not name at a location he would not specify. "Look at what happened to him," Mr. Harrison said, sitting in his office here at Oakwood College, where he teaches photography. "He was just ostracized by the ornithological community for the rest of his life." Mr. Harrison is willing to take the risk. He has had a major part in the most recent report that the ivory bill lives and now, after a period of acceptance and celebration, some scientists and birders are questioning the strength of the evidence: a videotape of a bird and eyewitness accounts. What the critics want is an absolutely clear photograph and a bird that can be seen repeatedly by a variety of observers. It is 17 months since the day - Feb. 27, 2004 - when he and Tim Gallagher of the Cornell Lab of Ornithology were paddling a canoe in the Cache River National Wildlife Refuge in eastern Arkansas, bumping into cypress trees and searching tall tupelos for some hint of an ivory bill. They were following up on the report of Gene Sparlin, a kayaker who had seen some sort of bird but was not sure what it was. "We knew what we were looking for," Mr. Harrison said. Then a bird appeared in the distance and he and Mr. Gallagher watched its flight, wondering what it was. "As soon as it broke over the bayou and tipped, I knew what it was," Mr. Harrison said. When it flew over land, they tried to chase it through the swamp, running over the wet ground, carrying binoculars and notebooks. Finally they stopped, he said, and he wept. Recalling the moment in an interview, he choked up again. Like other birders, Mr. Harrison developed his passion early in life. He has been looking for an ivory bill since 1972, when he was 17. He is a particular species of birder; he has always had a single-minded dedication to one bird. 
It is no surprise that he picked the ivory bill. It was - or is - the largest American woodpecker and has long haunted the imaginations of birders because of its elegance and its disappearance. He took a video of another ivory bill sighting, one that has not been widely released, that he has provided to the Cornell Lab. The video, played at normal speed, shows about a quarter-of-a-second glimpse of something fast flying by a tree where he had placed a decoy bird. Shown in slow motion after some technical manipulation to separate each frame, the video shows a black and white bird. This is not what he wants. He wants to get a photograph that nobody can argue with, the kind that does not need an expert to interpret it, so that the average person can clearly see the bird. He will be back in the swamp in Arkansas in August and this fall, and in other swamps after that. He knows he has seen the bird. "I've waited all my life for this," he said. "Still haven't got that photograph I want." Mr. Harrison said he always called the people who had seen ivory bills "the chosen few." "And I was one of the chosen," he said. "It's a moment I waited for most of my adult life. And it happened. Never thought it would really happen." The sighting that day was the beginning of a major - and secret - search, by a team of experts from the Cornell Lab and other groups. It culminated last April in a public announcement and a paper by a gaggle of experts in the June 3 issue of Science. The ivory-billed woodpecker, the group reported, was alive. Unlike reports of past sightings, this one seemed so solid that it provoked only elation, a public sigh of relief and wonder. The re-discoverers floated on the almost palpable gratitude of birders and others who treated the news as a sign of hope. Until now. Three scientists have a paper in the works at the Public Library of Science challenging the report in Science. No details have been released, but there are other signs of doubt. 
David Allen Sibley, the prominent American birder and the author of popular field guides, said Thursday that he had concluded that in the Science paper, "the evidence they've presented falls short of proof." Mr. Sibley said he decided this independently of the three scientists who wrote the rebuttal, although he had been in contact with them. Kenn Kaufman, another major birding author, also said in an interview that he was not satisfied with the evidence. Although he said he believed the sighting was real, he did not think the re-discoverers had proved their case. Mr. Harrison said that he could not comment on an unpublished paper, but that he was confident in the finding, and welcomed a scientific discussion. "I'm surprised it didn't happen sooner," Mr. Harrison said. Nor do the critics question his integrity or that of Mr. Gallagher or of the other authors of the Science paper. "The people who originally announced this thoroughly believe they got an ivory-billed woodpecker," said Mark B. Robbins of the University of Kansas, one of the three scientists preparing the challenge to the Science report. "They believe one thing, we believe another. This is how science plays out, the fabric of science getting at the truth." Except that with the ivory bill, nothing is ever business as usual. Even when it was common, the bird had a certain majesty and mystery. For the last 50 years it has been a symbol of loss, and of human failure. Most people were afraid to hope. So the report in Science, reviewed by other researchers, with multiple sightings over the course of a year by respected observers, and a blurry videotape that was exhaustively analyzed, was greeted with almost religious fervor. Mr. Kaufman described the initial reaction as: "The bird is back from the grave. Eureka! We're saved." Pete Dunne, vice president of the New Jersey Audubon Society and a prolific author on birds, said he was one of many who thought the ivory bill was gone for good. 
"If someone had said to me, what was more likely, the rediscovery of the ivory-billed woodpecker or the Second Coming, unhesitantly I would have gone to the latter." He is now a firm believer. "The credentials of the people who saw this are stellar," he said. Usually, scientists and birders are skeptical. In fact, Mr. Kaufman said, "I've actually been shocked that virtually everyone has been embracing this." He added, "I do in fact believe that there was a bird there last year, but it hasn't been proven and we could have a more honest discussion if people accept the fact that we don't have proof." Mr. Sibley is unconvinced. At first, he, too, was elated, and went down to Arkansas for 10 days to look for the ivory bill without success. It was only when he returned, he said, that he began to think critically about the Science report. "It's really crushing to come to the conclusion that it might not be true, that there is room for some reasonable doubt." He has been reluctant to speak publicly about his doubts, and described doubters as being treated as "heretics" in online discussions. The reason he is speaking out now, he said, is that he worried that money might be diverted from other conservation efforts. What he said he wanted, for proof, was "redundancy. Repeated sightings by independent observers of birds really well seen." This is what Mr. Harrison wants, more than anything. And he understands the skeptics, because he has been one. But this time, he and his colleagues are following in the long tradition of Mr. Dennis and the late Dr. Lowery. "I know the bird is there," he said. From checker at panix.com Sun Jul 24 15:04:08 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:04:08 -0400 (EDT) Subject: [Paleopsych] Sify: Now, Weblogs for extraterrestrials Message-ID: Now, Weblogs for extraterrestrials http://sify.com/finance/fullstory.php?id=13900526 Thursday, 21 July , 2005, 11:39 Is there anybody out there? 
And if there is, what will alien life forms make of bloggers signed up to a new service to beam their online rantings into outer space? "We are giving bloggers the opportunity to send a piece of their lives into space to potentially connect with extraterrestrials," says Ted Murphy of the Florida-based firm MindComet. "I've always believed that other intelligent life forms are out there, and now, for the first time, they will be able to peer into the life of average Homo sapiens," Murphy says. Humans have been sending television and radio signals into space for decades. Now MindComet wants to offer any aliens a new way of looking at Earth. "This program gives us the opportunity to show our race in a different light," Murphy says. While the service says it supports "intergalactic free speech", bloggers are asked to keep their contributions "clean". Extraterrestrials could have sensitive ears, and online weblogs should be suitable for an alien family audience, and without explicit content. "Aliens may find your lifestyle, grammar or the picture of your girlfriend offensive, we just don't know," the website says. Bloggers must also promise not to make money from aliens. "The site may not be used to sell products or services to alien life forms or to increase alien traffic to your blog," according to the terms of use. According to the latest statistics, there are 178 blog feeds, from 14 countries and in four languages, registered with the site. From checker at panix.com Sun Jul 24 15:04:20 2005 From: checker at panix.com (Premise Checker) Date: Sun, 24 Jul 2005 11:04:20 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: Scaring Us Senseless Message-ID: Scaring Us Senseless http://www.nytimes.com/2005/07/24/opinion/24taleb.html By NASSIM NICHOLAS TALEB Glasgow I WAS visiting London last Thursday when a second wave of attacks hit the city, just two weeks after the traumatic events of July 7.
It is hard to avoid feeling vulnerable to this invisible enemy who does not play by known or explicit rules. Of course, that is precisely the anxiety that terrorists seek to produce. But its opposite - complacency - is not an option. The truth is that neither human beings nor modern societies are wired to respond rationally to terrorism. Vigilance is easy to muster immediately after an event, but it tends to wane quickly, as the attack vanishes from public discourse. We err twice, first by overreacting right after the disaster, while we are still in shock, and later by under-reacting, when the memory fades and we become so relaxed as to be vulnerable to further attacks. Terrorism exploits three glitches in human nature, all related to the management and perception of unusual events. The first and key among these has been observed over the last two decades by neurobiologists and behavioral scientists, who have debunked a great fallacy that has marred Western thinking since Aristotle and most acutely since the Enlightenment. That is to say that as much as we think of ourselves as rational animals, risk avoidance is not governed by reason, cognition or intellect. Rather, it comes chiefly from our emotional system. Patients with brain lesions that prevent them from registering feelings even when their cognitive and analytical capacities are intact are incapable of effectively getting out of harm's way. It is largely our emotional toolkit, and not what is called "reason," that governs our capacity for self-preservation. Second, this emotional system can be an extremely naïve statistician, because it was built for a primitive environment with simple dangers. That might work for you the next time you run into a snake or a tiger. But because the emotional system is impressionable and prefers shallow, social and anecdotal information to abstract data, it hinders our ability to cope with the more sophisticated risks that afflict modern life.
For example, the death of an acquaintance in a motorcycle accident would be more likely to deter you from riding a motorcycle than would a dispassionate, and undoubtedly far more representative, statistical analysis of motorcycles' dangers. You might avoid Central Park on the basis of a single comment at a cocktail party, rather than bothering to read the freely available crime statistics that provide a more realistic view of the odds that you will be victimized. This primacy of the emotions can distort our decision-making. Travelers at airports irrationally tend to agree to pay more for terrorism insurance than they would for general insurance, which includes terrorism coverage. No doubt the word "terrorism" can be specific enough to evoke an emotional reaction, while the general insurance offer wouldn't awaken the travelers' anxieties in the same way. In the modern age, the news media have the power to amplify such emotional distortions, particularly with their use of images that go directly to the emotional brain. Consider this: Osama bin Laden continued killing Americans and Western Europeans in the aftermath of Sept. 11, though indirectly. How? A large number of travelers chose to drive rather than fly, and this caused a corresponding rise in casualties from automobile accidents (any time we drive more than 20 miles, our risk of death exceeds that of flying). Yet these automobile accidents were not news stories - they are a mere number. We have pictures of those killed by bombs, not those killed on the road. As Stalin supposedly said, "One death is a tragedy; a million is a statistic." Our emotional system responds to the concrete and proximate. Based on anecdotal information, it reacts quickly to remote risks, then rapidly forgets. And so the televised images from bombings in London cause the people of Cleveland to be on heightened alert - but as soon as there is a new tragedy, that vigilance is forgotten. 
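Taleb's parenthetical claim - that driving more than about 20 miles carries a higher death risk than taking a flight - can be sanity-checked with a back-of-envelope comparison. The fatality rates below are rough, commonly cited ballpark figures chosen for illustration; they are not taken from the op-ed:

```python
# Back-of-envelope check of the "20 miles of driving vs. one flight" claim.
# Both rates below are assumed ballpark figures, not data from the article.
DRIVING_DEATHS_PER_MILE = 1.1e-8      # ~1.1 deaths per 100 million vehicle-miles
FLIGHT_DEATHS_PER_DEPARTURE = 2e-7    # ~0.2 deaths per million commercial departures

def driving_risk(miles: float) -> float:
    """Approximate probability of death for a car trip of the given length."""
    return miles * DRIVING_DEATHS_PER_MILE

drive = driving_risk(20)              # risk of a 20-mile drive
flight = FLIGHT_DEATHS_PER_DEPARTURE  # risk of a single flight

print(f"20-mile drive: {drive:.1e}, one flight: {flight:.1e}")
```

Under these illustrative rates, the 20-mile drive (about 2.2e-7) already edges out a single flight (about 2e-7), which is consistent with the order of magnitude Taleb cites; the exact crossover mileage depends entirely on the rates assumed.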
The third human flaw, related to the second, has to do with how we act on our perceptions, and what sorts of behavior we choose to reward. We are moved by sensational images of heroes who leap into action as calamity unfolds before them. But the long, pedestrian slog of prevention is thankless. That is because prevention is nameless and abstract, while a hero's actions are grounded in an easy-to-understand narrative. How can we act on our knowledge of these human flaws in order to make our society safer? The audiovisual media, with their ability to push the public's emotional hot buttons, need to play a more responsible role. Of course it is the news media's job to inform the public about the risk and the incidence of terrorism, but they should try to do so without helping terrorists achieve their objective, which is to terrify. Television images, in all their vividness and specificity, have an extraordinary power to do just that, and to persuade the viewer that a distant risk is clear and present, while a pressing but underreported one is nothing to worry about. Like pharmaceutical companies, the news media should study the side effects of their product, one of which is the distortion of the viewer's mental risk map. Because of the way the brain is built, images and striking narratives may well be necessary to get our attention. But just as it takes a diamond to cut a diamond, the news industry should find ways to use images and stories to bring us closer to the statistical truth. Nassim Nicholas Taleb, who teaches risk management at the University of Massachusetts at Amherst, is the author of "Fooled by Randomness: The Hidden Role of Chance in Life and the Markets." From haskellre at tampadsl.net Sun Jul 24 16:18:11 2005 From: haskellre at tampadsl.net (Robert E. Haskell) Date: Sun, 24 Jul 2005 12:18:11 -0400 Subject: [Paleopsych] H-N: Thos. E. 
Dickins: A Necessary Pain in the Heart In-Reply-To: References: Message-ID: <6.2.3.4.0.20050724121536.032086b0@mail.tampadsl.net> I find that I must respond to this post - something which rarely, if ever, I have done. In the course of an otherwise excellent understanding and review of my colleague David Smith's book Why We Lie, a long-standing pet peeve of mine was introduced, which I would like to take the opportunity to clarify. I do this because the object of my pet peeve generally haunts the research on unconscious processing, and more specifically continues to haunt my own work on unconscious communications - which David so kindly cites. To wit: when the term "unconscious" is encountered, it is almost always automatically assumed to be of the Freudian kind. The short of it is that while most psychoanalytic concepts are based in unconscious processing, not all unconscious processing is based in psychoanalysis (while all As are Bs, not all Bs are As). The concept of the unconscious was around before Freud (granted, he systematized the concept within a specific framework). See Ellenberger's The Discovery of the Unconscious, and L. L. Whyte's The Unconscious Before Freud. And certainly, the modern concept of a cognitive unconscious in cognitive science has a wealth of non-Freudian research and theory. For an extended explanation of this issue, see my Chapter Five, "Discovering Deep Listening: What Freud Didn't Know But Almost Did - and Should Have," of Haskell, R. E. (2001). Deep listening: Uncovering hidden meaning in conversations. Cambridge, MA: Perseus Books. See also the following, where I point out that unconscious communications do not need to be linked to psychoanalytic theory. Haskell, R. E. (1999). Unconscious Communication: Communicative Psychoanalysis and Sub-literal Cognition. Journal of the American Academy of Psychoanalysis, 27(3), 471-502. See too, Chapter Twelve of: Haskell, R. E. (1999).
Between the Lines: Unconscious meaning in everyday conversation. New York: Plenum/Insight Books (taken out of print by author but still available on the web). Finally, see the following framework for unconscious communications. Haskell, R. E. (2003). A Logico-mathematic, Structural Methodology: Part I [of III]. The Analysis and Validation of Sub-literal (SubLit) Language and Cognition. Journal of Mind and Behavior, 24(3/4), 347-400. If list members are interested in this issue, I would more than gladly cite some of the above material and clarify further. Thanks. At 11:00 AM 7/24/2005, Premise Checker wrote: >Thos. E. Dickins: A Necessary Pain in the Heart >http://human-nature.com/ep/reviews/ep03175178.html >5.7.10 >[Thanks to Laird for this.] > > Evolutionary Psychology 3: 175-178 > > Book Review > A Necessary Pain in the Heart* > > A Review of Why We Lie: The Evolutionary Roots of Deception and the > Unconscious Mind by David Livingstone Smith. New York: St Martin's > Press. ISBN 0-312-31039, 2004. > > Thomas E. Dickins, School of Psychology, University of East London and > Centre for Philosophy of Natural and Social Science, London School of > Economics, London E15 4LZ, United Kingdom. > > * This is a line from Stevie Wonder's song Ordinary Pain, on his album > Songs in the Key of Life, 1976. This song advocates a stringent > functionalism about emotional responses. > > Six years ago, at the annual Human Behaviour and Evolution Society > conference, I sat down to dinner with a group of fellow evolutionary > behavioural scientists. Everyone was in high conference spirits and > everyone at my table was male. Soon conversation moved from social > gossip about fellow delegates to talking about relationships, and one > of our number posed the question "does working in this field hinder > your romantic relationships?"
The table was divided, with half of the > men claiming no influence whatsoever, for in those intimate > circumstances their behaviours simply played out naturally. The other > half saw knowledge about evolved mating behaviours as a hindrance to > their interactions, for they often failed to seize the moment and > instead went off-line and observed the interaction with a critical > eye. I placed myself in this latter camp. > > At the time the conversation was an amusing conceit and I thought > little more about it. But during the course of the following six years > my personal life continued and, as it turned out, my relationship > history unfolded somewhat unfortunately. When my wife and I separated > I quite naturally tried to think about the situation from an > evolutionary perspective and I asked myself whether I could > conceptualize the failure in our relationship in terms of what I knew > about mating decisions. Of course, I soon chastised myself for trying > to jump from statements about human universals to an analysis of the > fine-grained sequences of behaviour that constituted my marriage. > Nonetheless, I had started down a particular road in my thinking. I > did not understand the nature of the emotional pain I felt, but I > recognised that it was patterned; I did not understand my motivations > for saying certain things during the separation process, but I saw > that they achieved certain specific effects. Surely, these things were > not idiosyncratic to me and surely there must be a functional story to > tell about this aspect of psychology? > > Whilst I reflected on this I remembered the conversation at the > conference and realised that no one on the table had claimed that an > evolutionary perspective could help a relationship; the only expressed > options were no effect or hindrance. Perhaps, I rather grandly > reasoned, an evolutionary account of the emotions felt around > separation might form the foundation of a useful therapeutic tool. 
> During discussions with a number of patient colleagues, one of them > reminded me of Freud's ambitions. Freud had hoped to integrate an > account of personal-level psychological machinations with contemporary > neurological science. Freud, of course, failed in this attempt but his > expression of the problem can only be seen as useful. > > Recently, Timothy D. Wilson (2002) has more formally resurrected > Freud's project in his book Strangers to Ourselves. The subtitle of > this book is Discovering the Adaptive Unconscious, which captures > Wilson's thesis that much of our psychology is unconscious and adapted > to solve specific problems. Our conscious, or personal-level > processing is perhaps best seen as a calibrational tool, or set of > tools, that finesses work done by the unconscious. In making this > claim, Wilson brings the Freudian project into contact with modern > evolutionary approaches. However, Wilson does not offer a detailed > adaptationist analysis of our unconscious psychology and instead > tantalisingly hints at a variety of possible functions that are served > by such processes. David Livingstone Smith's book, on the other hand, > sets out to achieve an adaptationist decomposition of at least one > aspect of our unconscious psychology; that which delivers/underlies > social manipulation. > > The first half of the book is an introduction to evolutionary > psychology and to theories of deception and self-deception. It is from > this half that the book gains its title. For those well versed in > evolutionary approaches to the behavioural sciences this can be > skipped: however, for those who are not, its light touch and pace will > bring them rapidly to a point Smith's core thesis can be digested. > > It is as follows. We tell stories; or rather we construct narratives > about much of what goes on in our lives. These narratives are for our > own private consumption, to explain events as well as to shape and > predict futures. 
Our stories find public uses too, for they act as > communicative structures. However, most of our conversational > machinations are not, in fact, under personal-level control, but > instead are under the unconscious or sub-personal-level control of a > social module. This module is a domain-specific device, in keeping > with contemporary assumptions in Evolutionary Psychology, and it > delivers (small-p) political insight, in keeping with the > Machiavellian Intelligence Hypothesis, as well as more general social > scanning. The key point is that this module renders us highly > sensitive to other people and it influences our narration in such a > way as to deliver unintended messages. At the personal-level we tell > ourselves we are delivering message x, but our sub-personal-level > cognition is in fact causing us to send message y. > > An example of such coded communication happened when I once entered a > public house in the U.K. with a fellow academic. We were engaged in a > debate about some aspect of cognition, vigorously disagreeing with > each other while we found somewhere to sit. We eventually perched on > the edge of a shared bench and continued arguing, seemingly oblivious > to the world around us. At one point my antagonist, who had grown > frustrated with my line of argument, declared that "your argument is > about as useful as a one-armed man on a building site." Sitting next > to him, further along the bench, was a one-armed man who was clearly > manipulating pints, wallets, cash and handshakes in a different manner > than most. No one had mentioned this man, and my antagonist claimed > he had not even seen him; but, according to Smith, the likelihood is > that my colleague had seen him, had registered his loss of an arm, and > had had a series of thoughts about the consequences of such an injury.
> Such features are of importance to a social animal, and according to > Smith, are the kind of thing we might comment on to the extent that > even if we do not directly discuss the issue, it will find a way to be > expressed in our conversation. > > Smith has many examples of situations in which public pronouncements > indirectly (and not always too subtly) convey messages about key > social facts. One striking example is of a conversation among some of > Smith's students. Three students had turned up to a class on a harsh > winter's morning, and the remaining four had not. Whilst they were > waiting for the class to start a conversation ensued that included the > following exchange: > > Amy > : I heard a horrible story on the news, but I can't remember what > it was. > > Michelle > : There was this guy who drove up into the mountains with his > three-year-old child. He went out hunting and left the kid all by > himself in the truck. When he came back his son was frozen to > death. He just went off to enjoy himself, and when he came back his > son was dead. (p. 129) > > Smith claims that this conversation was a coded way of commenting on > the absence of the other class members, and that the "man in the story > appears to stand for the absent students and his abandoned child > stands for the three students who turned up for class" (p. 130). Amy > and Michelle would not necessarily have been aware of this, but their > concerns were filtering through, none the less. > > It is clear from the above examples that Smith has retained much of > the Freudian project. Here we have an attempt to uncover unconscious > motivations by attending to the content of conversations, which is > reminiscent of psychoanalysis. Analysing conversations in this manner, > as Smith readily admits, appears to stretch credulity at points: what > external measure do we have to validate such claims? 
Nonetheless, > Smith is not putting this forward as a fait accompli but rather as an > open hypothesis for future refining and testing. > > Smith's thesis presents an interesting counter to many social > scientists working in the constructionist tradition (see Dickins, 2004 > for a discussion of this tradition and its weaknesses). In its mild > form this tradition claims that much of our knowledge about the world > is socially constructed in a language that does not directly represent > reality. Instead, we create narratives that reflect our various > interests, and that are malleable in the face of small-p and big-p > political forces. Such narration impairs our ability to deliver > objective knowledge about the world, according to some theorists. A > typical (and adequate) retort to this position is to undermine the > wholesale application of the concept of narration and present some > form of realist philosophy of science. Smith has extended this reply > by treating human narrative practices, in social situations, as a > phenomenon to be explained; as something that is patterned, seemingly > designed and therefore open to an adaptationist analysis. Smith has in > effect asked the question - "if we generate narratives then what are > their properties and how do we understand them?" His answer is that > they are highly social and indirect forms of communication that are > influenced by a Machiavellian module. > > Although the book is well written and engaging, it is not entirely > clear how to relate the discussions of deception with the discussion > of unconscious influences on our narratives. One possible link is > through the discussion of self-deception, in which Smith outlines the > familiar argument that the best way to deceive others is by deceiving > ourselves. In this way we are so certain of the untruth that we will > not give away any "tells" that might undermine the necessary > deception.
Such an idea is clearly an aspect of the relationship > between personal- and sub-personal-level interactions; but > functionally this is quite distinct from the indirect signalling > functions of our narratives. At most, all that can be said is that > both deception and indirect signalling are about social manipulation, > but this is too coarse-grained an analysis to yield a useful evolutionary > psychology. Instead it seems that Smith has discussed two aspects of > the evolutionary Freudian project. > > I opened this review by asking whether or not, as with the original > Freudian project, evolutionary psychology could ever hope to deliver > understanding of human troubles, and perhaps even some order of > therapeutic intervention. Smith has not attempted to do this (despite > a therapeutic background) but his thesis must surely be of interest to > those involved in the "talking therapies"; indeed, some of Smith's > examples come from therapeutic conversations. By turning an > adaptationist eye to the possible sub-textual social signalling of our > narratives we might begin to recognise patterns of expression that are > indicative of malaise and low-mood. We might also begin to see how > seemingly normal conversations between people, in whatever form of > relationship, are encoding and signalling discontent and frustrations. > Just as we are uncovering the necessary elements of emotional pain, so > we might uncover the ordinary sub-personal signals of everyday > conversation. > > References > > Dickins, T. E. (2004). Social Constructionism as Cognitive Science. > Journal for the Theory of Social Behaviour, 34(4), 333-352. > > Wilson, T. D. (2002). Strangers to Ourselves: Discovering the Adaptive > Unconscious. London: The Belknap Press of Harvard University Press. > > Citation > > Dickins, T. E. (2005). A Necessary Pain in the Heart. A Review of Why > We Lie: The Evolutionary Roots of Deception and the Unconscious Mind > by David Livingstone Smith.
Evolutionary Psychology, 3:175-178. > > [9]Thomas Dickins > >References > > 9. mailto:t.dickins at uel.ac.uk >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych From shovland at mindspring.com Mon Jul 25 06:30:58 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Mon, 25 Jul 2005 08:30:58 +0200 (GMT+02:00) Subject: [Paleopsych] NYT Op-Ed: Scaring Us Senseless Message-ID: <28253883.1122273058356.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> I think the bombings in London were "false flag" operations. Turkey and Egypt as well, and some incidents in Iraq. From christian.rauh at uconn.edu Mon Jul 25 14:00:52 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Mon, 25 Jul 2005 10:00:52 -0400 Subject: [Paleopsych] NYT Op-Ed: Scaring Us Senseless In-Reply-To: <28253883.1122273058356.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> References: <28253883.1122273058356.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> Message-ID: <42E4F094.2020600@uconn.edu> What is a "false flag"? Christian shovland at mindspring.com wrote: > I think the bombings in London were "false flag" operations. > > Turkey and Egypt as well, and some incidents in Iraq.
> > > > -----Original Message----- > From: Premise Checker > Sent: Jul 24, 2005 5:04 PM > To: paleopsych at paleopsych.org > Subject: [Paleopsych] NYT Op-Ed: Scaring Us Senseless > > Scaring Us Senseless > http://www.nytimes.com/2005/07/24/opinion/24taleb.html > > By NASSIM NICHOLAS TALEB > Glasgow > > I WAS visiting London last Thursday when a second wave of attacks hit > the city, just two weeks after the traumatic events of July 7. It is > hard to avoid feeling vulnerable to this invisible enemy who does not > play by known or explicit rules. Of course, that is precisely the > anxiety that terrorists seek to produce. But its opposite - > complacency - is not an option. > > The truth is that neither human beings nor modern societies are wired > to respond rationally to terrorism. Vigilance is easy to muster > immediately after an event, but it tends to wane quickly, as the > attack vanishes from public discourse. We err twice, first by > overreacting right after the disaster, while we are still in shock, > and later by under-reacting, when the memory fades and we become so > relaxed as to be vulnerable to further attacks. > > Terrorism exploits three glitches in human nature, all related to the > management and perception of unusual events. The first and key among > these has been observed over the last two decades by neurobiologists > and behavioral scientists, who have debunked a great fallacy that has > marred Western thinking since Aristotle and most acutely since the > Enlightenment. > > That is to say that as much as we think of ourselves as rational > animals, risk avoidance is not governed by reason, cognition or > intellect. Rather, it comes chiefly from our emotional system. > > Patients with brain lesions that prevent them from registering > feelings even when their cognitive and analytical capacities are > intact are incapable of effectively getting out of harm's way. 
It is > largely our emotional toolkit, and not what is called "reason," that > governs our capacity for self-preservation. > > Second, this emotional system can be an extremely naïve statistician, > because it was built for a primitive environment with simple dangers. > That might work for you the next time you run into a snake or a tiger. > But because the emotional system is impressionable and prefers > shallow, social and anecdotal information to abstract data, it hinders > our ability to cope with the more sophisticated risks that afflict > modern life. > > For example, the death of an acquaintance in a motorcycle accident > would be more likely to deter you from riding a motorcycle than would > a dispassionate, and undoubtedly far more representative, statistical > analysis of motorcycles' dangers. You might avoid Central Park on the > basis of a single comment at a cocktail party, rather than bothering > to read the freely available crime statistics that provide a more > realistic view of the odds that you will be victimized. > > This primacy of the emotions can distort our decision-making. > Travelers at airports irrationally tend to agree to pay more for > terrorism insurance than they would for general insurance, which > includes terrorism coverage. No doubt the word "terrorism" can be > specific enough to evoke an emotional reaction, while the general > insurance offer wouldn't awaken the travelers' anxieties in the same > way. > > In the modern age, the news media have the power to amplify such > emotional distortions, particularly with their use of images that go > directly to the emotional brain. > > Consider this: Osama bin Laden continued killing Americans and Western > Europeans in the aftermath of Sept. 11, though indirectly. How?
A > large number of travelers chose to drive rather than fly, and this > caused a corresponding rise in casualties from automobile accidents > (any time we drive more than 20 miles, our risk of death exceeds that > of flying). > > Yet these automobile accidents were not news stories - they are a mere > number. We have pictures of those killed by bombs, not those killed on > the road. As Stalin supposedly said, "One death is a tragedy; a > million is a statistic." > > Our emotional system responds to the concrete and proximate. Based on > anecdotal information, it reacts quickly to remote risks, then rapidly > forgets. And so the televised images from bombings in London cause the > people of Cleveland to be on heightened alert - but as soon as there > is a new tragedy, that vigilance is forgotten. > > The third human flaw, related to the second, has to do with how we act > on our perceptions, and what sorts of behavior we choose to reward. We > are moved by sensational images of heroes who leap into action as > calamity unfolds before them. But the long, pedestrian slog of > prevention is thankless. That is because prevention is nameless and > abstract, while a hero's actions are grounded in an easy-to-understand > narrative. > > How can we act on our knowledge of these human flaws in order to make > our society safer? > > The audiovisual media, with their ability to push the public's > emotional hot buttons, need to play a more responsible role. Of course > it is the news media's job to inform the public about the risk and the > incidence of terrorism, but they should try to do so without helping > terrorists achieve their objective, which is to terrify. > > Television images, in all their vividness and specificity, have an > extraordinary power to do just that, and to persuade the viewer that a > distant risk is clear and present, while a pressing but underreported > one is nothing to worry about. 
> > Like pharmaceutical companies, the news media should study the side > effects of their product, one of which is the distortion of the > viewer's mental risk map. Because of the way the brain is built, > images and striking narratives may well be necessary to get our > attention. But just as it takes a diamond to cut a diamond, the news > industry should find ways to use images and stories to bring us closer > to the statistical truth. > > Nassim Nicholas Taleb, who teaches risk management at the University > of Massachusetts at Amherst, is the author of "Fooled by Randomness: > The Hidden Role of Chance in Life and the Markets." > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > -- ~ I G N O R A N C E ~ The trouble with ignorance is precisely that if a person lacks virtue and knowledge, he's perfectly satisfied with the way he is. If a person isn't aware of a lack, he can not desire the thing which he isn't aware of lacking. Symposium (204a), Plato _____________________________________________________________________ From waluk at earthlink.net Mon Jul 25 15:39:02 2005 From: waluk at earthlink.net (Gerry) Date: Mon, 25 Jul 2005 08:39:02 -0700 Subject: [Paleopsych] NYT Op-Ed: Scaring Us Senseless In-Reply-To: <42E4F094.2020600@uconn.edu> References: <28253883.1122273058356.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> <42E4F094.2020600@uconn.edu> Message-ID: <42E50796.60807@earthlink.net> I was also unfamiliar with the term: From Wikipedia: *False flag* operations are covert operations conducted by governments, corporations, or other organizations, which are designed to appear as if they are being carried out by other entities.
The name is derived from the military concept of flying false colors; that is, flying the flag of a country other than your own. This was considered acceptable provided one lowered the false flag and raised the national flag before engaging in battle. Auxiliary cruisers operated in such a fashion in both World Wars. In the most notable example, the German Kormoran raider surprised and sank the Australian light cruiser HMAS Sydney in 1941, causing the greatest loss of life on an Australian ship ever. Gerry Reinhart-Waller Christian Rauh wrote: > What is a "false flag"? > > Christian > > shovland at mindspring.com wrote: > >> I think the bombings in London were "false flag" operations. >> Turkey and Egypt as well, and some incidents in Iraq. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anonymous_animus at yahoo.com Mon Jul 25 18:55:21 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Mon, 25 Jul 2005 11:55:21 -0700 (PDT) Subject: [Paleopsych] new song In-Reply-To: <200507251800.j6PI0gR01659@tick.javien.com> Message-ID: <20050725185521.58200.qmail@web30801.mail.mud.yahoo.com> I wrote my first song lyrics in a long time, it's a love letter to everyone involved in the "culture wars":

Reprise

Encapsulated in our cars
immunized from the stars
we were so united at the start
how did we drift so far apart
a dubious distinction to have it all undone
by our insecure insistence on being number one

turn around and face the morning sun
listen to the messengers of dawn
put away your armor and your scripted litany
learn at last to love your enemy

the apple of your eye is poison to your soul
a good man in a bad crowd is a bird in a black hole
the venom that we swallowed is the virus of control
how could subdivision make us whole?

we hide behind these escalating masks of enmity
why is it so painful to agree?
we only see our neighbor down the barrel of a gun
we'll all go into the abyss as one

time plays a game with our hearts
an endless war of light and gravity
divide the tribes and polarize the family
and learn to stand the holy ground of unity
no one left to criticize
it's only us we demonize

turn around and face the morning sun
listen to the messengers of dawn
put away your armor and your scripted litany
learn at last to love your enemy

why are we so taken by the proud
appealing to the madness of the crowd
applauding when they show their snow white fleece
and crucify the ones who stand for peace
who will choose the ones who have no choice
who will stand for those who have no voice
why are we still on our knees, paralyzed with fear
the ones that we are waiting for are here

turn around and face the morning sun
listen to the messengers of dawn
put away your armor and your scripted litany
learn at last to love your enemy

Michael

Michael C. Lockhart http://www.soulaquarium.net Blog: http://shallowreflections.blogspot.com/ Yahoo Messenger:anonymous_animus "The most dangerous things in the world are immense accumulations of human beings who are manipulated by only a few heads." - Carl Jung "We are stardust, we are golden, We are billion year old carbon, And we've got to get ourselves back to the garden." Joni Mitchell ____________________________________________________ Start your day with Yahoo! - make it your home page http://www.yahoo.com/r/hs From checker at panix.com Tue Jul 26 00:30:07 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:30:07 -0400 (EDT) Subject: [Paleopsych] NYT: Cigarettes, Taxes and Thin French Women Message-ID: Cigarettes, Taxes and Thin French Women http://www.nytimes.com/2005/07/24/business/yourmoney/24view.html By DANIEL GROSS THE obesity epidemic isn't just a growing health risk; it's also a problem for the economy.
The percentage of Americans over 20 who are regarded as obese has more than doubled, to about 30 percent, from about 14 percent in the early 1970's. And the Centers for Disease Control and Prevention says obesity was responsible for 112,000 premature deaths in 2002, and for $75 billion in medical costs in 2003. But as the United States lost the battle against the bulge, it waged a more successful campaign against another menace to public health: smoking. Because of an aggressive public information campaign, new restrictive laws and huge increases in federal and state taxes, the percentage of the population that smoked fell to 22.5 percent in 2002, from 37 percent in 1970. Strange as it may sound at first, many economists and health care experts say they believe that the two trends may be related. Experts blame factors ranging from urban sprawl to junk-food-laden diets for the increase in the number of Americans who are obese - defined as having a body mass index of over 30. But smoking, or the decline of smoking, may also play a role. Nicotine is a stimulant, which means that smokers burn calories faster. And it's an appetite suppressant, which means that smokers eat less. Consider "French Women Don't Get Fat," the best selling book. Some critics said that the real reason chic Parisian women stayed trim while gorging themselves on croissants was that they smoked more than their American counterparts. Indeed, conventional wisdom, soundly rooted in the personal experience of millions of former smokers and in several studies, has long held that short-term weight gain is the price to be paid for quitting smoking. But economists are increasingly applying their tools to measure the way monetary incentives, or disincentives, affect all sorts of human behavior - and hence the ability of government policy to alter it. 
And they've been wondering whether high cigarette taxes, which are intended to encourage people to quit smoking, may have the unintended effect of redirecting them from one form of unhealthy behavior to another. According to William Orzechowski and Rob Walker, two economic consultants based in Arlington, Va., the price of a pack of cigarettes rose to $3.37 a pack in 2001 from 63 cents in 1980, thanks in large measure to various state and federal tax increases. (Adjusted for inflation, that's a 164 percent gain.) And smokers responded the way any economically rational consumer would, despite the fact that many felt as if they were addicted: they stopped using the product as it became more expensive. Broadly speaking, said Michael Grossman, an economics professor at the Graduate Center of the City University of New York, a 10 percent increase in the price of cigarettes leads to a 5 percent reduction in cigarette consumption. But the fear of gaining weight as a result of quitting may have discouraged some smokers from stopping - and apparently with good reason. In a 2004 study, Professor Grossman, along with Shin-Yi Chou of Lehigh University and Inas Rashad of Georgia State, mined state-by-state behavioral surveys from 1984 to 1999 to get to the root causes of rising obesity. While they found that the prevalence of fast-food restaurants was responsible for most of the climb, they concluded that the decline in smoking accounted for about 20 percent of it. Over all, they found that "each 10 percent increase in the real price of cigarettes produces a 2 percent increase in the number of obese people, other things being equal." Jonathan Gruber, an economist at the Massachusetts Institute of Technology, didn't believe that the relationship between lower smoking and higher obesity rates was so direct. While people may gain weight when they quit smoking, they tend to shed those pounds later. 
"There's no evidence in the medical literature that quitting smoking will affect your weight over a long period of time," he said. And by themselves, the short-term weight gains associated with quitting smoking shouldn't be enough to push masses of former smokers into obesity. Professor Gruber, with the assistance of Michael Frakes, a Ph.D. student, analyzed the same numbers that Professors Grossman, Chou and Rashad did, but with different methodology. Rather than focusing on the way prices affected consumption, Professor Gruber looked at how people living in different states reacted when state cigarette taxes were sharply increased. He also ignored factors like the number of fast-food restaurants. His method allowed him to isolate the way sudden government-imposed price increases affected consumer behavior. And when he compared the results with obesity figures in the states, he reached a surprising conclusion. "Raising cigarette taxes causes smoking to fall, but it doesn't lead to obesity," Professor Gruber said. "If anything, it was lower." There is a maze of possible explanations - beyond economics. Perhaps the conventional wisdom is wrong about stopping smoking and gaining weight. Or perhaps people who quit smoking decide at the same time to start exercising more and to watch their weight. Professor Grossman finds the results intriguing, but he is not prepared to embrace them wholeheartedly. "If you come up with a counterintuitive finding like this," he said, "you have to ask if you've gone too far." Professor Gruber says his results need further testing. But if borne out, the findings would add to the evidence in favor of high cigarette taxes. After all, what other act of government has been shown to raise needed government revenue and discourage citizens from engaging in an expensive, unhealthful habit - all while helping them shed a few pounds? Daniel Gross writes the "Moneybox" column for Slate.com.
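The two rules of thumb quoted in the article (Professor Grossman's "a 10 percent increase in the price of cigarettes leads to a 5 percent reduction in cigarette consumption," and the Grossman-Chou-Rashad finding that "each 10 percent increase in the real price of cigarettes produces a 2 percent increase in the number of obese people") amount to simple linear elasticities. A minimal sketch of that arithmetic, purely illustrative (the function names are mine, and such local elasticity estimates are only meaningful for modest price changes, not the full 164 percent real rise of 1980-2001):

```python
def pct_change_smoking(price_increase_pct, elasticity=-0.5):
    """Grossman's rule of thumb: a +10% cigarette price change
    implies roughly a -5% change in cigarette consumption."""
    return elasticity * price_increase_pct

def pct_change_obese(price_increase_pct, elasticity=0.2):
    """Grossman/Chou/Rashad estimate: each +10% change in the real
    price of cigarettes implies roughly +2% more obese people."""
    return elasticity * price_increase_pct

# The article's headline numbers, reproduced from the quoted elasticities:
print(pct_change_smoking(10))  # -5.0 (percent change in consumption)
print(pct_change_obese(10))    # 2.0 (percent change in number obese)
```

Note that Gruber's state-by-state tax-shock analysis disputes the second relationship, which is exactly the kind of disagreement a crude constant-elasticity extrapolation like this cannot settle.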
From checker at panix.com Tue Jul 26 00:30:13 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:30:13 -0400 (EDT) Subject: [Paleopsych] AmSpectator: Tom Bethell: Challenging Conventional Wisdom: Is cancer caused by genes? Message-ID: ---------- Forwarded message ---------- Date: Mon, 25 Jul 2005 09:16:38 -0400 (EDT) From: Premise Checker To: Premise Checker Subject: AmSpectator: Tom Bethell: Challenging Conventional Wisdom: Is cancer caused by genes? Tom Bethell: Challenging Conventional Wisdom: Is cancer caused by genes? The American Spectator, 5.7-8 First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.7.25 http://chronicle.com/prm/daily/2005/07/2005072201j.htm A glance at the July/August issue of The American Spectator: The problem with conventional cancer theory Researchers are pursuing the wrong cancer theory and are making one of the "great medical errors of the 20th century," says Tom Bethell, a senior editor at the magazine. But scientists are not solely to blame, he says. It is the National Institutes of Health that's curtailing alternative research. Mr. Bethell bashes the dominant hypothesis that cancer is caused by gene mutations in single cells. That theory, known as the "multiple-hit theory," is not supported by evidence showing that mutated genes cause cancerous cells that continually divide and spread, he says. The truth, he argues, lies in the forgotten theory that the disease is caused by an incorrect number of chromosomes in cancerous cells. The correct theory, the "aneuploid theory," is not accepted by most mainstream researchers or the NIH, writes Mr. Bethell. For a century scientists have known that cancer cells do not have the right number of chromosomes, he says. Such "aneuploid" cells have as many as 80 chromosomes instead of the customary 46. 
The cells occur when normal cells divide into "daughter" cells but errors in the division result in irregular segregation of the chromosomes. Most aneuploid cells die off immediately, but those that survive create extra DNA in each new-generation cell. The DNA hijacks the cell's control mechanisms, and begins multiplying. The result is a lump of abnormal cells -- a tumor. The NIH maintains its pursuit of gene-mutation research, though, and has not given credit or grants to aneuploid research, although it should, says Mr. Bethell. In the end, he says, funds from the NIH may never go toward research that creates a "real breakthrough." "If so," he says, "we will all have learned a very expensive lesson." --Devin Varsalona --------------------------

SCIENTISTS THESE DAYS TEND TO BELIEVE that almost any trait can be attributed to a gene. The gene obsession, showing up in science journals and on the front page of the New York Times, culminated in the Human Genome Project. The human genome was sequenced, then that of the fruit fly, the rat, the mouse, the chimpanzee, the roundworm, yeast, and rice. Computers cranked out their mindless data. It has been a bonanza for techies and the computer industry but the medical benefits have remained elusive.

Now they are talking about a Cancer Genome Project. It would determine the DNA sequence in 12,500 tumor samples and is supposed to reveal cancer-causing mutations by comparing the order of the letters of the genetic code in tumor cells with sequences in healthy tissue. But there is no single cancer genome, and the project will not improve our understanding of cancer.

Cancer has proved resistant to every "breakthrough" and treatment hype, and the new approach will only sustain the error that has dominated cancer research for 30 years. Since the mid-1970s, leading researchers have doggedly pursued the fixed idea that cancer is caused by gene mutations.
I believe it will prove to have been one of the great medical errors of the 20th century.

WHERE TO BEGIN? One place is a story in the Washington Post, a few months back, headlined "Genetic Test Is Predictor of Breast Cancer Relapse." The test "marks one of the first tangible benefits of the massive effort to harness genetics to fight cancer," Rob Stein wrote. No real benefits yet? I think that is correct. Two well-publicized genes supposedly predispose women for breast cancer, but in over 90 percent of cases these genes have shown no defect.

Genes that (allegedly) cause cancer when they are mutated are called oncogenes. They were reported in 1976 by J. Michael Bishop and Harold Varmus, who were rewarded with the Nobel Prize. Varmus became director of the National Institutes of Health (NIH) under President Clinton; Bishop, chancellor of the University of California in San Francisco, one of the largest medical-research institutions in the country. The two scientists had "discovered a collection of normal genes that can cause cancer when they go awry," Gina Kolata later reported in the New York Times. About 40 such genes had been discovered. Normally harmless, "they would spring into action and cause cancer if they were twitched by carcinogens." When mutated, in other words. This was "a new era in research."

The following week, on October 20, 1989, Science magazine also reported the award. The article claimed: "...the work of the Bishop-Varmus group has had a major impact on efforts to understand the genetic basis of cancer. Since their 1976 discovery, researchers have identified nearly 50 cellular genes with the potential of becoming oncogenes." Their work was "already paying off clinically."

And so it went. Researchers began to find more and more of these oncogenes; then "tumor suppressor genes" were added.
Now, in the Washington Post article, we read that "researchers sifted through 250 genes that had been identified as playing a role in breast cancer."

So, up to 250 genes are "playing a role." The Sanger Institute, which was also involved in the human genome project, claimed recently that "currently more than one percent of all human genes are cancer genes." The latest figure is 25,000 genes in total for humans, so that is surely where the 250 "cancer genes" came from.

At the beginning, the oncogene theory posited that a single gene, when mutated, turned a normal cell into a cancer cell. We have gone from 1 to 250, the latter "playing a role." This "multiplication of entities" -- genes -- is the hallmark of a theory that is not working. It's what philosophers call a "deteriorating paradigm." The theory gets more and more complex to account for its lack of success. The number of oncogenes keeps going up, even as the total number of genes goes down. Six years ago some thought humans had 150,000 genes in all. Now it's one-sixth that number. How long before they find that all the genes "play a role" in cancer?

IT ALWAYS WAS unlikely that a single mutated gene would turn a cell into a cancer cell. Mutations occur at a predictable rate in the body. As the cells of the body number perhaps trillions we would all have cancer if a single hit was sufficient. Then came the "multiple hit" theory. Three or four, maybe six or seven genes would all have to mutate in the same cell during its lifetime. Then, bingo, your unlucky number had come up. That cell became a cancer cell. When it divided it just kept on and on dividing.

Meanwhile, the underlying theory never changed. The research establishment remains in thrall to the idea that cancer is caused by gene mutations. It was and is unable to lay its hands on the genes responsible, but it believes they are in there somewhere.

There are several problems with the theory, but the most basic is this.
Researchers have never been able to show that a mutated gene, taken from a cancer cell, will transform normal cells in the petri dish. They are unable to show that the allegedly guilty party is capable of committing the crime. They can transport these mutated genes into test cells. And the supposed deadly genes are integrated into the cell's DNA. But those cells do not turn into cancer cells, and if injected into experimental animals, they don't cause tumors. That's when the experts said, well, there must be four or five genes all acting at once in the cell. But they have never been able to say which ones, nor show that in any combination they do the foul deed.

There is even a genetically engineered strain of mice called OncoMouse. They have some of these oncogenes in every cell of their small bodies. You would have thought they would die of cancer immediately. But they leave the womb, gobble up food, and live long enough to reproduce and pass on their deadly genes to the next generation.

I have a suggestion for Gina Kolata, who still works on these issues for the New York Times. Why not try asking Varmus or Bishop exactly which genes, either individually or in combination, cause cancer in humans or anything else? I tried calling Bishop at UCSF a few months back but couldn't get through. He will respond to the New York Times, surely. But maybe not with a straight answer.

The desire to start over with a "cancer genome project" tells you they know they are not even at first base. Dr. Harold Varmus, now president of the Memorial Sloan-Kettering Cancer Center in New York, told the Times in March that the new project could "completely change how we approach cancer."

Completely change? Maybe we do need a complete change. What about his decades-old Nobel work? Was that a waste? In a way I think it was worse than that, because when an erroneous theory is rewarded with the top prize in science, abandoning that theory is difficult.
The backtracking required is an embarrassment to all.

JOURNALISM PLAYS A CRUCIAL ROLE. Especially in the field of medical science, there is a big problem. It exists at all major newspapers and I don't mean to single out the New York Times. Science journalists don't see themselves as qualified to challenge the experts. If a reporter were to do so, quoting non-approved scientists, top-echelon NIH officials would surely complain to editors, and the reporter would be reassigned. The nation's health would be said to be endangered.

All this contrasts with the far greater freedom that journalists enjoy in the political arena, including defense and foreign policy. About 35 years ago, leading newspaper editors decided to chart their own course and form their own judgments. The context was the Vietnam War, more specifically the Pentagon Papers. A big report critical of U.S. policy was leaked to the press, and the Nixon administration went to great pains to suppress it. National security was invoked, judicial restraining orders were issued, but eventually the "public's right to know" trumped "national security." The material was published.

That was the background from which Woodward and Bernstein and the Watergate investigation emerged a year later. And we were the better off for it. The real danger, then and now, was that of unchecked government power. And we are seeing that exercised in the realm of medical science, where we do not have a press that dares to think independently.

HOW DID THE IDEA TAKE ROOT that gene mutations cause cancer? Well, in the 1920s researchers bombarded fruit flies with X-rays and mutant flies resulted. Humans exposed to large X-ray doses a hundred years ago proved to be at high risk for skin cancer and leukemia. It was convincingly shown that X-rays produced both mutations and cancers.

Working at the NIH in the 1960s, the biochemist Bruce Ames used bacteria to detect the mutagenic properties of various substances.
Some carcinogens proved to be mutagenic, hence the gene-mutation theory of cancer. Robert A. Weinberg, who directs a cancer research lab at MIT, says that by the 1970s he and others had come to believe that "Ames was preaching a great and simple lesson" about carcinogens: "Carcinogens are mutagens."

Some are, but some of the best known are not. Neither asbestos nor coal tar, found in cigarettes, is mutagenic. They are carcinogens but they don't affect the DNA -- the genes. But there was one more crucial discovery still to be made. Or rather, rediscovery.

Robert Weinberg later claimed that a mutation in a single gene indeed had transformed a cell in vitro. But it turned out that the cell-line, one that had been provided by the NIH, was already "immortal," or cancerous. It did not have the right number of chromosomes.

Normal cells have 46 chromosomes -- 23 each from mother and father. Such cells are "diploid," because their complement of chromosomes is doubled. In case you never took biology, genes are segments of DNA strung along the chromosomes. The largest chromosomes, such as Chromosome 1 or 2, include several thousand genes each. Sometimes babies are born with one extra copy of the smallest chromosome, and because it is in the germ line this defect is in every cell of the body. Such babies have Down syndrome. Having an extra chromosome is serious business.

Here is the key point: cancer cells do not have the correct complement of chromosomes. Their "ploidy" is not good, so they are said to be aneuploid. Cancer cells are aneuploid. This defect arises not in the germ line, but in the grown body. Cells divide in the course of life, by a process called mitosis, and sometimes there is an error in the division. The chromosomes do not "segregate" properly (do not end up equally in the two daughter cells) and an extra chromosome may be hauled off into one of the new cells.
Such over-burdened cells will usually die, but sometimes the error repeats and magnifies and increases. The cell just keeps on dividing, its control mechanisms overridden by the abundance of extra DNA in the cell. A tumor forms in that part of the body, and that is cancer. Some cancer cells may have as many as 80 chromosomes instead of 46. They may actually have double the right number of genes.

The aneuploid character of cancer cells is the first thing that Theodor Boveri and others noticed when they began to look at cancer under the microscope, 100 years ago. Leaving unresolved the question of what causes aneuploidy, early researchers thought that this was surely the genetic cause of cancer. Mutation didn't enter into it. But gradually the early research was buried. In the last generation, textbooks on the cell and even textbooks on cancer have failed to mention aneuploidy or its bizarre chromosomal combinations. Weinberg wrote two books on cancer without mentioning aneuploidy. Overlooking what was plainly visible in the microscope, researchers worked for years with those defective, immortalized cell lines, assuming that their extra chromosomes were unimportant.

An analogy suggests the magnitude of the error. Cells today are compared to factories, so let's think of an automobile plant. A cancer cell is the equivalent of a monster car with (let's say) five wheels, two engines, and no brakes. Start it running and you can't stop the damned thing. It's hazardous to the community. The CEO wants to know what's gone wrong so he sends underlings into the factory. There they find that instead of the anticipated 46 assembly lines, there are as many as 80. At the end of the process this weird machine gets bolted together and ploughs its way out the factory door.

But today's gene mutation theorist is someone who says: "That's not it. The extra assembly lines are irrelevant.
What is happening is that three or four of the tens of thousands of workers along the assembly lines are not working right!" In the analogy, genes along the chromosomes correspond to workers along the assembly lines.

Any CEO would fire the lunatic who thought a few errant workers, and not the bizarre factory layout, had caused the mayhem. But in the realm of cancer research, those who do say that are rewarded with fat grants, top posts, and awards. That's a measure of what has happened to cancer research.

I HAVE LEFT THE MOST DRAMATIC PART to the end. The man who rediscovered the old work on chromosomes and cancer and has drawn attention to it ever since, supported by investigations of his own, is none other than Peter Duesberg of U.C. Berkeley. He was already in the dog house at NIH for saying that AIDS is not an infectious disease and that HIV is harmless. All his grants were cut off in retribution. But as a member of the National Academy of Sciences he could still publish in respectable journals. So for the last seven years he has been drawing attention to the cancer matter. The NIH is pursuing the wrong theory, he says. Talk about persona non grata! No more grants for him! (And he has not received any.)

A researcher at the University of Washington who became controversial at NIH in an unrelated field warned Duesberg that "in the present system of NIH grants, there is no way to succeed." No matter how much they prate in public about thinking outside the box and rewarding "high-risk" proposals, "the reviewers are the same and their self-interest is the same." In the cancer field, grant proposals are reviewed by, and won by, proponents of the gene mutation theory.

Wayt Gibbs published a good article about Duesberg's cancer findings in the Scientific American (July 2003). And this response is beginning to emerge in journals like Science: Er, well, there's nothing new here... We have always known that aneuploidy is important in cancer.
(Yes, but it was forgotten and then buried beneath the paper mountains of new research.) There is a quiet search for a "political" compromise: Can't we say that both gene mutation and aneuploidy "play a role" in the genetics of cancer?

A leading cancer researcher, Bert Vogelstein of Johns Hopkins, told me some time back that "at least 90 percent of human cancers are aneuploid." More recently, his lab reported that aneuploidy "is consistently shown in virtually all cancers." A few years ago, Varmus from Sloan-Kettering did answer my e-mail query, writing: "Aneuploidy, and other manifestations of chromosomal instability are major manifestations of many cancers and many labs have been working on them." But, he added: "Any role they play will not diminish the crucial roles of mutant proto-oncogenes and tumor suppressor genes."

But why not? Maybe aneuploidy is sufficient.

At the end of May, Duesberg was invited to speak at NIH. His topic: "Aneuploidy and Cancer: From Correlation to Causation." About 100 people showed up at Building 10. The Genetics branch of the National Cancer Institute (NCI) is interested in aneuploidy, and well aware of the political sensitivities. But I am told that the director of the NCI, Andrew von Eschenbach, a political appointee, is not particularly interested in aneuploidy. He should be, though, because he is a cancer survivor himself and in speeches calls for "eliminating the suffering and death from cancer by 2015."

Duesberg challenged the audience to prove him wrong. He is looking for diploid cancer: a solid tumor with the correct complement of chromosomes. He is not much interested in the compromise solutions -- "a bit of both theories." Prove me wrong, he says. A woman in the audience did suggest cases of tumors that looked diploid, but Duesberg knew the literature here and immediately referred her to a more recent study showing that these tumors, on closer microscopic inspection, proved to be aneuploid.
Maybe in the end he will show that in order to achieve a real breakthrough, it's important not to be funded by the NIH. If so, we will all have learned a very expensive lesson. Tom Bethell is a senior editor of The American Spectator. From checker at panix.com Tue Jul 26 00:30:27 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:30:27 -0400 (EDT) Subject: [Paleopsych] First Things: Theology for Physicists Message-ID: Theology for Physicists May 2005: Books in Review http://www.firstthings.com/ftissues/ft0505/reviews/bar.html First Things 153 (May 2005): 39-42. Science and the Trinity: The Christian Encounter with Reality By John Polkinghorne, Yale University Press. 208 pp. $24. Reviewed by Stephen M. Barr The story of science and religion since the Middle Ages has been one of estrangement rather than conflict. When the Aristotelian synthesis shattered, science and theology drifted apart, becoming at last disconnected universes of discourse. Over the last few decades many theologians and some scientists have attempted a new "dialogue of science and religion" in order to end this estrangement. A leading figure in this dialogue has been John Polkinghorne, a respected theoretical particle physicist at Cambridge University who, in the early 1980s, left scientific research in mid-career to become an Anglican clergyman and devote himself to writing on science and theology. The science-theology dialogue has chiefly dealt with natural theology and such basic issues as the existence of God, the order and intelligibility of the universe, the evidence for design and purpose in nature, and the limitations of a crassly reductionist materialism. It has brought greater understanding and even some agreement among people of diverse backgrounds and concerns, ranging from agnostic seekers to people of traditional faith.
And now, according to Polkinghorne, the dialogue is ready for a new stage--where theologically deeper and specifically Christian subjects are addressed. Science and the Trinity: The Christian Encounter with Reality, based on Polkinghorne's 2003 Warfield Lectures at Princeton Theological Seminary, is a contribution to this new stage of engagement. Along the way, Polkinghorne argues that going beyond the basics of theism can make belief more credible to nonbelievers. He makes an analogy with natural science: "Significant scientific advances often begin with the illuminating simplicity of a basic insight ... but they persist and persuade through the detailed and complex explanatory power of subsequent technical development." In the same way, theism is more persuasive in the form of a richly elaborated theological tradition than in the bare abstractions of philosophy. He therefore maintains that the next stage of dialogue is best conducted from within a particular tradition of faith. For him, the tradition of "Trinitarian theology" provides the most persuasive and satisfying "theological thickness." Polkinghorne contrasts his own attitude toward tradition with that of three other prominent "scientist-theologians": Paul Davies, Ian Barbour, and Arthur Peacocke. In order of increasing respect for Christian tradition, Davies represents the "deistic" approach, Barbour the "theistic," Peacocke the "revisionist," and Polkinghorne the "developmental." Polkinghorne's understanding of proper theological "development" owes more to modern liberal Anglicanism than to John Henry Newman. Nevertheless, in spite of what he calls his "flexibility of hermeneutical strategy," Polkinghorne really is quite traditional in many ways. He believes in the Trinity, the virgin birth, the empty tomb, and the post-resurrection appearances of Christ.
Although comfortable with modern biblical criticism, he is able to muster a degree of skepticism toward the hyperskeptical approaches of its more extreme practitioners. Polkinghorne also differs from the other scientist-theologians he discusses in his view of the proper relation between theology and science. Davies, Barbour, and Peacocke are all to some degree "assimilationists" who seek "to achieve a greater merging of the two disciplines." Polkinghorne sees a danger in this: Christian theology has its own sources, insights, methods, and internal logic, so that it risks being denatured if "theological concerns become subordinated to the scientific." Still, theology should take account of scientific insights, for these not only raise important questions for theological reflection but can even "motivate the imposition of certain metaphysical constraints" on what could be considered satisfactory answers. Nevertheless, unlike Peacocke, Polkinghorne does not think any "radical revision" of Christian doctrine is required to meet the challenges raised by modern scientific thought. The particular questions addressed in Science and the Trinity concern the role of Scripture, God's relationship to the universe, the nature of what Polkinghorne awkwardly calls "God in Godself," the Eucharist, and eschatology. Polkinghorne's method in addressing these questions is empirical and inductive. "Because I am a theoretical physicist," he writes, "the style of thinking I adopt is a `bottom-up' approach, which seeks to move from experience to understanding." He certainly does not reject the idea of divine revelation, top-down though it may be. But he tends to conceive of revelation as "an experiential encounter," in which "critical episodes" led the people of Israel, later the first Christians, and finally ourselves to "revelatory insights." 
The Bible is thus a "record of experience," in the interpretation of which "creative freedom" is allowed--while tradition is the sum of the Christian community's ongoing experiences and reflections, to be mined in an eclectic way. In other words, Polkinghorne does not wish to be in "unthinking thrall to the past" or to authority, whether in the form of a book or a Church. And yet, he also wishes to avoid the danger of "loose individualism" or "rampant relativism" that could lead to "willful or fantastic manipulation" of the Word of God. He acknowledges that some "degree of control" must be exercised by "the oversight of a truth-seeking community" and "the sifting and receiving role played by the whole Christian community." Who is this community and how much sifting does it do? "It is comparatively easy," he assures the reader, "for those of us who seek to `operate within the orbit where the bible is interpreted' to recognize each other, even if sometimes we find the other person saying something very different from what we ourselves think." "What we ourselves think" is a typical turn of phrase for Polkinghorne, and it suggests that much individualism remains even after all the sifting has been done. There is a great deal of value in Polkinghorne's reflections on many of the subjects he takes up in this book. On several key points, however, he is every bit as radical a revisionist as Peacocke. This is particularly so with regard to the omniscience, immutability, and simplicity of God. Polkinghorne embraces the fashionable idea that God does not know the future--either because it does not exist to be known, or because God deliberately chooses not to know it. He argues that this view of omniscience makes room for free will, simplifies theodicy, accords with the developmental nature of the world, and makes God's knowledge truer to that which is known (as though things that happen successively must be known successively). 
These are flimsy arguments upon which to base a revision of doctrine. The first was demolished by St. Augustine, who pointed out that God's knowledge of future acts no more renders them unfree than our own knowledge of past acts renders them unfree. The third and fourth arguments are based on the fallacy that acts of knowing must partake of the qualities of the things known. (It is not true, for example, that one's knowledge of smells is smelly, or that one's knowledge of evil must be evil. No more, it would seem, must knowledge of change be changeable.) But aside from this philosophical flimsiness, what is surprising about Polkinghorne's arguments, given the context in which they are made, is that they are not based on anything science has taught us about the world. In fact, his position is open to the objection that it does not square with what physics has learned about the nature of time. One's stream of consciousness can be divided into past, present, and future, and Newtonian physics projected this tripartite division onto the whole physical universe. But Einstein showed that spatio-temporal relationships are more subtle: there is no absolute meaning to the question of what is happening (or coming into being) "now" throughout the whole universe. And if it is a mistake to project the timeline of our mental states onto the entire universe, it is even less justified to project it onto God, who infinitely transcends the universe. It is equally ill-defined to speak of "the future" or "the past" in some global sense. Furthermore, to correlate God's supposed past, present, and future mental states with what is going on in the world "simultaneously" with them imposes upon the world exactly the one-dimensional temporal structure that physics tells us it does not have. Polkinghorne attempts to preserve the idea of divine immutability by positing within the divine nature both a "temporal pole" and an "eternal pole."
He suggests this may help resolve what might be called the empty throne problem--"how one is to understand the continuing providential governance of the universe during the episode of the earthly life of the incarnate God." The traditional answer, according to Polkinghorne, is that since only the Son was incarnate, the other two Persons of the Trinity were still in heaven to mind the store, as it were. He rightly prefers the idea (which he somehow imagines to be Calvinist) that the Word continued "to participate in the governing of creation" during Christ's earthly life. How is this to be conceived? He suggests, with "some trepidation" and "considerable tentativeness," that "it was the temporal pole of the Second Person that became incarnate in Jesus of Nazareth, while the eternal pole continued its timeless participation in the divine essence and governance." Both the supposed empty-throne problem and Polkinghorne's proposed solution must strike anyone who has grasped the main point of the Council of Chalcedon as peculiar. The required distinction is not between two "poles" within the divine nature, but between the divine nature of Christ, which is eternal, and his human nature, which is temporal. Related confusions lead Polkinghorne to abandon the dogma of divine simplicity. "Trinitarian thinking," he writes, "surely indicates a degree of complexity existing eternally within the divine nature." As traditionally understood, the Trinity does not involve a split within the divine nature. Rather, each Person is understood to possess the whole divine nature. In Jesus, St. Paul writes, "the fullness of the Godhead dwelt bodily"--the fullness, not a piece or a pole. Polkinghorne's Trinitarian theology is not the traditional one, but in the end that may matter little. It is not for his Trinitarian speculations that he is justly honored, but for his powerful and very public witness. 
His life and writings have given eloquent testimony that one may be both a man of science and a man of God. Stephen M. Barr is a theoretical particle physicist at the Bartol Research Institute of the University of Delaware. He is the author of Modern Physics and Ancient Faith (University of Notre Dame Press). From checker at panix.com Tue Jul 26 00:30:37 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:30:37 -0400 (EDT) Subject: [Paleopsych] New Left Review: Perry Anderson: Arms and Rights: Rawls, Habermas and Bobbio in an Age of War Message-ID: Perry Anderson: Arms and Rights: Rawls, Habermas and Bobbio in an Age of War http://www.newleftreview.net/NLR26501.shtml New Left Review 31, January-February 2005 In an era of serial war, Rawls, Habermas and Bobbio as theorists of a perpetual peace. Jurisprudence and force in three parallel philosophical constructions of the present international order, and the unsettled afterthoughts--American, German, Italian--that accompanied them. In the final decade of the century that has just ended, three of the most distinguished political philosophers of the time turned their attention to the international scene. In the early nineties, each had published what could be seen as a culminating statement of their reflections on the internal life of Western liberal democracies: Jürgen Habermas's Faktizität und Geltung (1992), John Rawls's Political Liberalism (1993), and Norberto Bobbio's Destra e Sinistra (1994). There followed, focusing now on external relations between states, Habermas's `Kant's Idea of Perpetual Peace: at Two Hundred Years' Historical Remove' (1995) and `The Postnational Constellation' (1998), and Rawls's Law of Peoples (1999). Bobbio, who had started thinking about international relations much earlier, and anticipated many of their concerns in `Democracy and the International System' (1989), produced more punctual interventions in these years, each arousing major intellectual debates.
[1] The apparent alteration in attention of Rawls and Habermas, previously often reproached with lack of concern for global issues, was by contrast striking. In the background to a new set of preoccupations, on the part of all three thinkers, stretched the frieze of world history, as the end of the Cold War brought not pacification of relations between states, but military engagements of a frequency not seen since the sixties, in the Gulf, the Balkans, the Hindu Kush and Mesopotamia. Each philosopher sought to offer proposals appropriate to the time. Of the three, it was Rawls who offered the most systematic outline of a desirable international order. The Law of Peoples extends the modelling devices of A Theory of Justice from a national to a global plane. How is international justice to be realized? Rawls argues that we should imagine an `original position' for the various peoples of the earth parallel to that for individuals within a nation-state. In it, these collective actors choose the ideal conditions of justice from behind a veil of ignorance concealing their own size, resources or strength within the society of nations. The result, he argues, would be a `law of peoples' comparable to the contract between citizens in a modern constitutional state. But whereas the latter is specifically a design for liberal democracies, the scope of the former extends beyond them to societies that cannot be called liberal, yet are orderly and decent, if more hierarchical. The principles of global justice that should govern democratic and decent peoples alike correspond by and large to existing rules of international law, and the Charter of the United Nations, but with two critical corollaries. On the one hand, the Law of Peoples--so deduced from an original position--authorizes military intervention to protect human rights in states that are neither decent nor liberal, whose conduct brands them as outlaws within the society of nations. 
Regardless of clauses to the contrary in the UN Charter, these may be attacked on the grounds of their domestic policies, even if they present no threat to the comity of democratic nations. On the other hand, the Law of Peoples involves no obligation to economic redistribution between states comparable to the requirements of justice within democratic societies. The Difference Principle, Rawls explains, does not apply between peoples, since the disparities in their wealth are due not to inequality of resources, but principally to contrasts in culture. Each society is essentially responsible for its own economic fate. Better-off peoples have a duty of assistance to those that are historically more burdened by their culture, but this does not extend beyond helping them achieve the sufficiencies needed for a decent hierarchical order. A legal empyreum that conformed to these rules would have every chance of extending the peace that has reigned for more than a century between the world's democracies to all corners of the earth. The Law of Peoples, inspired by the long experience of this silence of arms among liberal societies, configures a `realistic utopia'. Rawls explains at the outset of The Law of Peoples that the basic intention of his work was to offer a contemporary version of Kant's For a Perpetual Peace: A Philosophical Sketch of 1795. Habermas, proceeding from the same inspiration, sought more explicitly to update Kant, reviewing the posthumous fortunes of his scheme on the occasion of its bicentenary and, where necessary, adjusting it to present conditions. War could be abolished, Kant had believed, by the gradual emergence of a federation of republics in Europe, whose peoples would have none of the deadly impulses that drove absolute monarchs continually into battle with each other at the expense of their subjects--the drive for glory or power.
Rather, interwoven by trade and enlightened by the exercise of reason, they would naturally banish an activity so destructive of their own lives and happiness. For well over a century, Habermas observes, history rebuffed this prospect. Democratic peoples showed they could be just as bellicose as autocratic princes. Instead of peace-giving trade, there came industrial revolution and class struggle, splitting rather than uniting society. The public sphere became prey to distortion and manipulation with the arrival of modern media. Yet since the close of the Second World War, Kant's vision has come to life again, as his premises have been fulfilled in altered conditions. Statistical research confirms that democracies do not war with each other. Within the OECD, nations have become economically interdependent. The welfare state has pacified class antagonisms. NGOs and global summits on population or the environment show that an international public sphere is taking shape. But if Kant's diagnostic has today been vindicated, his institutional scheme for a perpetual peace has proved wanting. For a mere foedus pacificum--conceived by Kant on the model of a treaty between states, from which the partners could voluntarily withdraw--was insufficiently binding. A truly cosmopolitan order required force of law, not mere diplomatic consent. The UN Charter, in banning aggressive wars and authorizing measures of collective security to protect peace, and the UN Declaration of Human Rights, laid some of the legal bases for one. But in continuing--inconsistently--to proclaim national sovereignty inviolable, the Charter had not advanced decisively beyond Kant's original conception. The transformative step still to be taken was for cosmopolitan law to bypass the nation-state and confer justiciable rights on individuals, to which they could appeal against the state.
Such a legal order required force: an armed capacity to override, where necessary, the out-dated prerogatives of national sovereignty. The Security Council was an imperfect instrument of this imperative, since its composition was open to question and its actions were not always even-handed. It would be better if it were closer in model to the Council of Ministers in the European Union, but--in this unlike the latter--with a military force under its command. Nevertheless, the Gulf War was evidence that the UN was moving in the right direction. The present age should be seen as one of transition between international law of a traditional kind, regulating relations between states, and a cosmopolitan law establishing individuals as the subjects of universally enforceable rights. Bobbio's starting-point, by contrast, lay in Hobbes. For theorists of natural law, the passage from a state of nature to a civil union required two distinct contracts: the first, an agreement between warring individuals to form an association; the second, to submit to the decisions of an authority in case of disputes among them; a pact of non-aggression, and a pact for pacific settlement of conflicts. For Hobbes, neither were possible in relations between states. For them, peace could never be more than a temporary suspension of war, the inescapable condition of competing sovereign powers. This was an accurate description, Bobbio agreed, of the classical system of international relations, down to the twentieth century. But with the advent of the League of Nations, and then of the United Nations, for the first time a pactum societatis started to take shape between sovereign states. Still lacking, however, was any pactum subiectionis for the resolution of conflicts and the enforcement of rights. Democratic ideals plainly informed the UN's Declaration of Human Rights, and the representative equality of its General Assembly.
But national sovereignty continued to frustrate the first, and the character of the Security Council to thwart the second. Transactions between the Great Powers still essentially determined the fate of the earth. Yet now these coexisted with another and better framework. If it was wrong to idealize the UN, scepticism about it was also misplaced. The new system of international relations it half-embodied had not done away with a much older one; but nor had the latter succeeded in dispatching this more recent version. The two rubbed against each other--one still effective but no longer legitimate, the other legitimate but not yet effective. [2] For what was still missing from the contemporary inter-state system was the juridical figure of the Third--Arbiter, Mediator or Judge--created by any pact of submission, of which Hobbes's Leviathan, governing those who had voluntarily made themselves its subjects, had offered a compelling, if autocratic, intra-state model. Today, the abstract outline of such a Third could acquire democratic form as a cosmopolitan sovereignty based on the consent of states, empowered to enforce universal peace and a catalogue of human rights. The first condition of such a desirable order had already been perceived by Kant. It was the principle of transparency, abolishing the arcana imperii that had always characterized the foreign policies of democracies and tyrannies alike, under the pretext that affairs of state were too complex and delicate to broadcast to the public, and too dangerous to reveal to the enemy. Such secrecy could not but erode democracy itself, as innumerable actions--at home as well as abroad--of the national security services of contemporary states testified. Here a vicious circle was at work. States could only become fully democratic once the international system became transparent, but the system could only become fully transparent once every state was democratic.
Yet there were grounds for hope: the number of democracies was increasing, and a certain democratization of diplomacy was visible. As Kant had once seen in general enthusiasm for the French Revolution a `premonitory sign' of the moral progress of humanity, so today universal acceptance of human rights, formal as this still might be, could be read as a portent of a pacified future to come. [3] Maryland, Rhineland, Piedmont The similarity of these constructions, arrived at independently, is all the more notable for the differing profiles of their authors. Biographically, the formative experience of each lay in the Second World War, but these years were lived in sharply contrasting ways. Rawls (1921-2002), who came from a wealthy family in Maryland and originally intended to become a Protestant minister, fought as an infantryman in the New Guinea and Filipino theatres of the Pacific War. The moral crises of the battlefield seem to have affected him deeply, changing a religious into a philosophical vocation. Returning home to pursue an academic career, he became the most widely read political thinker of his time with the publication, in the early seventies, of A Theory of Justice. Although framed entirely abstractly, Rawls's work was at the same time consistently prescriptive, however ambiguous its practical implications might be. His intellectual horizon of reference could be described as quite narrow: principally, Anglo-American moral philosophy from the time of Victoria to the Cold War, and an animating inspiration from Kant. Politically, Rawls described himself as a left liberal, and no doubt voted Democrat. But one of the most striking features of a thinker often admiringly described by colleagues as unworldly, was a complete abstention from any commentary on contemporary public affairs, throughout his life. Eight years younger, Habermas grew up in a small Rhenish town under Hitler. 
His father joined the Nazi party in 1933, and Habermas himself briefly took part in defensive work with the Hitlerjugend at the end of the war. After discovering the realities of the Third Reich and breaking with Heidegger, who had been his first major influence, Habermas became the major philosophical descendant of the Frankfurt School, absorbing its distinctive transformations of Marx, and then in turn criticizing these in the light of American pragmatism and systems theory. Intellectually heir to the totalizing ambitions of German idealism, scarcely any major philosophical tradition has fallen outside the range of his interests, in which sociology--classical and contemporary--has also occupied a central place. As a political thinker, the pattern of Habermas's writing reverses that of Rawls, whom he has criticized for his inappropriately substantive intentions. His own political theory is purely procedural, abstaining from any programmatic proposals. On the other hand, Habermas has never hesitated to intervene politically on topical issues, adopting public positions on leading disputes of the day in Germany, as a citizen of the left. His Kleine politische Schriften now run to nine volumes, rivalling the number of Sartre's Situations. At the same time, he has never been involved in any political organization, keeping his distance from SPD and Greens alike. A generation older, Bobbio (1907-2004) was born into a well-connected family in Turin which, like most of the Italian bourgeoisie, welcomed the March on Rome and Mussolini's dictatorship. After early work on Husserl, he turned to the philosophy of law. In his late twenties, friendship with intellectuals in the anti-fascist resistance led to brief arrest and release in 1935, after which he resumed a university career with a letter of submission to Mussolini, and intervention by an uncle acquainted with a leading hierarch of the regime.
By the outbreak of the war he was a member of a clandestine liberal socialist circle, and in 1942 became one of the founders of the Partito d'Azione, the leading force of the independent Left in the Italian Resistance. Active in the Partito d'Azione until 1948, when it faded from the scene, Bobbio became the most eloquent critical interlocutor of Italian Communism during the high Cold War. In 1966, when the long-divided Italian Socialists united again, he joined the reunified party, playing a major role both in its internal discussions and in public debates at large--after 1978, in sharp opposition to Craxi's leadership of the PSI. In 1984, on his retirement from the University of Turin, he was made a Senator for life, and in 1992 his name was canvassed as a candidate for President of the Republic. If Bobbio's career was thus a much more intensely political one than that of Habermas, let alone Rawls, as a theorist he was less systematic or original--limitations he was the first to emphasize. Steeped in the philosophy of law, which he taught for most of his life, and taking his primary inspiration from Kelsen's positivism, from the early seventies he occupied a chair of political science. In both fields he displayed a notably richer historical sense of his disciplines than either the American or the German thinker. The most influential of his voluminous writings were concerned with the origins, fate and future of democracy, and its relations with socialism. In these, he drew equally on Constant and Mill, on Weber and Pareto, to confront the legacy of Marx. They are texts that vividly reflect the energy and variety of Italian political culture in the post-war period, thrown into sharp relief against the monochrome landscape of the United States or the Federal Republic. To that extent, Bobbio's thought was the product of a national experience without equivalent elsewhere in the West. But in one critical respect he was also at an angle to his country.
From the early sixties onwards, Bobbio was preoccupied with global problems of war and peace that had little, if any, resonance in Italy--a subordinate state within the American security system, with no post-war colonies, and hardly a foreign policy worth speaking of, whose political class and electorate, famously polarized by domestic conflicts, took correspondingly little interest in affairs beyond their borders. Acutely concerned by the dangers of thermonuclear war between East and West, Bobbio devoted a series of his finest essays to inter-state relations in the atomic age, first collected as Il problema della guerra e le vie della pace in 1979, long before either Rawls or Habermas had got around to considering the plane of international politics. Americana Service in America's war to regain the Pacific; a boyhood in Nazi Germany; underground resistance against fascism. It would be surprising if three such distinct experiences were without trace in the work of those who went through them. Rawls and Habermas offer the most clear-cut contrast. From the beginning, there were critics--nearly every one also an admirer--of A Theory of Justice who were puzzled by its tacit assumption, never argued through as such, that the only relevant unit for its imaginary `original position', from which a just social contract could be derived, was the nation-state. How could a Kantian constructivism, deducing its outcome from universal principles, issue into the design merely of a particular community? The categorical imperative had known no territorial boundaries. At the time, the restriction could appear anodyne, since Rawls's two principles of justice, and their lexical order--first, equal rights to political liberty; second, only those socio-economic inequalities of benefit to all--presupposed conditions common to the wealthy capitalist countries of the West, with which his commentators were also essentially concerned. 
With the publication of Political Liberalism, however, the extent to which Rawls's preoccupations centred on just one--highly atypical--nation-state, his own, became clear. The whole problematic of this sequel, still couched in general terms, but now referring with diminishing compunction to strictly American questions or obsessions, revolved around the permissible role of religion in political life: an issue of small relevance in any major advanced society other than the United States. In the background, standard patriotic landmarks--the Declaration of Independence, the Bill of Rights, the Supreme Court, Lincoln's Inaugurals, the New Deal--demarcate the space of reflection. Moving into less familiar terrain, The Law of Peoples unfolds the logic of such introversion. Given that in A Theory of Justice it is the rational choice of individuals that is modelled in the original position, why does the same procedure not obtain for the law of peoples? Rawls's most impressive pupil, Thomas Pogge, deploring the conservative drift of his later work, has sought to extend its radical starting-point in just the way Rawls refuses, offering a vision of `global justice' based on the application of the Difference Principle to all human beings, rather than simply the citizens of certain states. [4] The reason why Rawls declined this amplification goes to the unspoken core of his theory. For individuals in the original position to reach unanimous agreement on the two principles of justice, Rawls had to endow them with a range of information and a set of attitudes derived from the very liberal democracies that the original position was supposed to generate--its veil of ignorance screening the fortunes of each individual in the social order to be chosen, but not collective awareness of its typical institutions. In The Law of Peoples, this circular knowledge resurfaces as the `political culture' of a liberal society. 
But just because such a culture inevitably varies from nation to nation, the route to any simple universalization of the principles of justice is barred. States, not individuals, have to be contracting parties at a global level, since there is no commonality between the political cultures that inspire the citizens of each. More than this: it is precisely the differences between political cultures which explain the socio-economic inequality that divides them. `The causes of the wealth of a people and the forms it takes lie in their political culture and in the religious, philosophical and moral traditions that support the basic structure of their political institutions'. [5] Prosperous nations owe their success to the diligence fostered by industrious traditions; lacking the same, laggards have only themselves to blame if they are less prosperous. Thus Rawls, while insisting that there is a right to emigration from `burdened' societies, rejects any comparable right to immigration into liberal societies, since that would only reward the feckless, who cannot look after their own property. Such peoples `cannot make up for their irresponsibility in caring for their land and its natural resources', he argues, `by migrating into other people's territory without their consent'. [6] Decorating the cover of the work that contains these reflections is a blurred representation, swathed in a pale nimbus of gold, of a statue of Abraham Lincoln. The nationalist icon is appropriate. That the United States owes its own existence to the violent dispossession of native peoples on just the grounds--their inability to make `responsible' use of its land or resources--alleged by Rawls for refusal of redistribution of opportunity or wealth beyond its borders today, never seems to have occurred to him. The Founders who presided over these clearances, and those who followed, are accorded a customary reverence in his late writings. 
Lincoln, however, held a special position in his pantheon, as The Law of Peoples--where he is hailed as an exemplar of the `wisdom, strength and courage' of statesmen who, unlike Bismarck, `guide their people in turbulent and dangerous times'--makes clear, and colleagues have since testified. [7] The abolition of slavery clearly loomed large in Rawls's admiration for him. Maryland was one of the slave states that rallied to the North at the outbreak of the Civil War, and it would still have been highly segregated in Rawls's youth. But Lincoln, of course, did not fight the Civil War to free slaves, whose emancipation was an instrumental by-blow of the struggle. He waged it to preserve the Union, a standard nationalist objective. The cost in lives of securing the territorial integrity of the nation--600,000 dead--was far higher than all Bismarck's wars combined. A generation later, emancipation was achieved in Brazil with scarcely any bloodshed. Official histories, rather than philosophers, exist to furnish mystiques of those who forged the nation. Rawls's style of patriotism sets him apart from Kant. The Law of Peoples, as he explained, is not a cosmopolitan view. [8]

A transcendental union

Habermas offers the antipodal case. In post-war Germany, reaction against the cult of the nation was stronger in his generation, which had personal memories of the Third Reich, than anywhere else in the West. Division of the country during the Cold War compounded it. Here there was little chance of taking the nation-state simply as an unspoken given of political reflection. For Habermas, the question was the opposite: what place could there be for the nation as a contingent community, whose frontiers were delimited by arms and accidents, within the necessary structure of liberal democracy? Since the Rechtsstaat embodies universal principles, how can it abide a particularistic core? Habermas offers two reasons, one theoretical and the other empirical.
So far as the first is concerned, he observes that `there is a conceptual gap in the legal construction of the constitutional state, that it is tempting to fill with a naturalistic conception of the people'--for `one cannot explain in purely normative terms how the universe of those who come together to regulate their common life by means of positive law should be composed'. [9] As for the second, in historical practice the ideals of popular sovereignty and human rights were too abstract to arouse the energies needed to bring modern democracy into being. Ties of blood and language supplied the extra momentum for the mobilization required, in which the nation became an emotional driving force akin to religion, as `a remnant of transcendence in the constitutional state'. [10] Nationalism then bred imperialism far into the twentieth century, sublimating class conflicts into wars of overseas conquest and external expansion. Today, however, two broad forces are weakening the political grip of the nation-state. On the one hand, the globalization of financial and commodity markets is undermining the capacity of the state to steer socio-economic life: neither tariff walls nor welfare arrangements are of much avail against their pressure. On the other, increasing immigration and the rise of multi-culturalism are dissolving the ethnic homogeneity of the nation. For Habermas, there are grave risks in this two-sided process, as traditional life-worlds, with their own ethical codes and social protections, face disintegration. To avert these dangers, he argued, a contemporary equivalent of the social response to classical laissez-faire that Polanyi had traced in The Great Transformation was needed--a second remedial `closure' of what had become a new, `liberally expanded', modernity.
[11] The European Union offered the model of what such a post-national constellation might look like, in which the powers and protections of different nation-states were transmitted upwards to a supra-national sovereignty that no longer required any common ethnic or linguistic substratum, but derived its legitimacy solely from universalist political norms and the supply of social services. It is the combination of these that defines a set of European values, learnt from painful historical experience, which can offer a moral compass to the Union. [12] Such a European federation, marking as it would a historic advance beyond the narrow framework of the nation-state, should in turn assume its place within a worldwide community of shared risk. For `the great, historically momentous dynamic of abstraction from the local to dynastic, to national to democratic consciousness' can take one more step forward. [13] World government remains impossible, but a world domestic policy does not. Since political participation and the expression of popular will, as Habermas puts it, are today no longer the predominant bases of democratic legitimacy, there is no reason to demand a planetary suffrage or representative assembly. The `general accessibility of a deliberative process whose structure grounds an expectation of rational results' is now more significant and, in such forms as a role for NGOs in international negotiations, may largely suffice for the necessary progress. For a cosmopolitan democracy cannot reproduce the civic solidarity or welfare-state policies of the European Union on a global scale. Its `entire normative framework' should consist simply of the protection of human rights--that is, `legal norms with an exclusively moral content'. [14] Beyond the obvious contrast in their valuations of the nation, a wider difference of outlook is noticeable in Rawls and Habermas here.
Habermas's vision of the requirements of the time is more sociologically informed, offering a general account of objective changes in the contemporary world. Rawls, lacking such sociological imagination, appears--as Pogge notes--to have been blind to the implications of globalized capital markets for his account of the moral qualities that distinguish peoples in the tending of their natural assets. This is not a mistake Habermas could have made. On the other hand, unlike Rawls, here too he eschews any specific proposal for economic relations between rich and poor zones of the earth, even of the limitative sort advanced in The Law of Peoples. All that the community of shared risk involves is international enforcement of human rights. Here the two thinkers return to each other. For both, human rights are the global trampoline for vaulting over the barriers of national sovereignty, in the name of a better future.

Consensus of religion

How are these prerogatives derived in the two philosophies? In A Theory of Justice, they are an unproblematic deduction from the device of the original position, as rights that hypothetical individuals would rationally select, inter alia, behind the veil of ignorance. This was an elegant solution that avoided determination of the status of rights claimed in the real world. By the time of Political Liberalism, concerned to construct an overlapping consensus from a variety of existing ideological standpoints--so inevitably requiring more empirical reference--it was no longer sufficient. To show that such a consensus would comprise his principles of justice, Rawls was now obliged to argue that all major religions contained moral codes compatible with them. In The Law of Peoples, the two lines of argument merge. Universal human rights are deducible from the choice that variant peoples, endowed as they are with differing faiths, would make if assembled together in an original position.
Since they form a narrower set than the full range of liberal rights, decent as well as democratic societies will select them; symptomatically, Rawls's examples of the former are consistently Muslim. Lacking a counter-factual artifice to derive them, Habermas is compelled to express a clearer view of human rights as they are actually invoked in the political world. Noting `a certain philosophical embarrassment' surrounding them, he concedes that they cannot be taken as moral rights inherent in each human being, since they are `juridical by their very nature'--that is, can exist only as determinations of positive law. Yet they are also `suprapositive', for their justification--unlike that of other legal norms--can be exclusively moral, requiring no further arguments in support of them. [15] What then is the morality that legitimates them? Here Habermas directly rejoins Rawls. `Does the claim to universality that we connect with human rights merely conceal a particularly subtle and deceitful instrument of Western domination?', he asks, `or do the universal world religions converge on them in a core repertoire of moral intuitions?' There are no prizes for guessing the answer. `I am convinced Rawls is right, that the basic content of the moral principles embodied in international law is in harmony with the normative substance of the great world-historical prophetic doctrines and metaphysical world-views'. [16] Habermas's more sociological side, however, which remembers Weber, cannot let the matter rest there. After all, surely the doctrine of human rights is specifically Western in origin, rather than of pan-confessional inspiration? Adjusting his sights, Habermas meets this objection by explaining that `human rights stem less from the particular cultural background of Western civilization than from the attempt to answer specific challenges posed by a social modernity that has in the meantime covered the globe'. 
[17] How, in that case, is it that the social challenges of modernity happen to coincide with the moral intuitions of antiquity--the Atomic and Axial ages unexpectedly melting into each other in the eloquence of UN prose? Habermas has a proviso ready to square this circle. The faiths that so harmoniously agree with each other, and with lay wisdom, are not `fundamentalist', but aware that their own `religious truths must be brought into conformity with publicly recognized secular knowledge', and so, `like Christianity since the Reformation', are `transformed into "reasonably comprehensive doctrines" under the reflexive pressure generated by modern life circumstances'. [18] With this gloss, the vacancy of the claim that human rights are validated by all world religions is laid bare. The slightest acquaintance with the Pentateuch, Revelation, the Koran or the Bhagavadgita--replete with every kind of injunction to persecution and massacre--is enough to show how absurd such an anachronistic notion must be. All that is really postulated by Rawls and Habermas is that, once religious beliefs are rendered indistinguishable from `public reason' or `secular knowledge', they can be enlisted like any other platitude as sponsors of whatever conventional wisdom requires. The fact that in the real world, transcendent faiths continue to represent contradictory ethical imperatives, waging ideological or literal war with each other, becomes an irrelevant residue: the domain of a `fundamentalism' that is no longer even quite religion, properly understood. In Habermas's construction, something similar occurs to democracy. Once this is redefined as principally a matter of `communication' and `consciousness', political participation and popular will become residuals that can be bypassed in the design of a cosmopolitan legal order.
Here too, the presiding concept ensures the desirable outcome--Habermas's discourse theory functioning, like Rawls's public reason, to neutralize democracy as once religion. For rather than a critique of the involution of classic democratic ideals in the dispersed and depoliticized representative systems of the West today, Habermas furnishes a metaphysical justification of it, in the name of the salutarily impersonal and decentred flux of communicative reason. The result is a political theory tailor-made for the further dissolution of popular sovereignty at a European level, and its vaporization altogether at a putative global level. To his credit, when writing on the actual European Union before his eyes, Habermas has sought to resist the logic of his own weakening of any idea of collective self-determination--calling, indeed, for more powers to the European parliament and the formation of European parties. But when, untempered by any comparable experience, he envisages a cosmopolitan order to come, the logic of his projection ends in a political wraith: democracy without democracy, shorn even of elections or voters.

Hiroshima's minatory shadow

The intellectual framework of Bobbio's prospectus stands apart from these two. The reason for that is its quite distinct historical starting-point. Rawls and Habermas were moved to reflections on the inter-state system only with the end of the Cold War. Their theories are plainly responses to the new world order announced in the wake of the Gulf War. By contrast Bobbio's concerns, predating theirs by three decades, were a product of the Cold War itself. The dangers of a nuclear exchange were all but completely absent from the analytics of either the American or the German. But it was these which determined the Italian's approach to the international scene.
The lesson of Carlo Cattaneo in the time of the Risorgimento, and of his teacher Aldo Capitini in the Resistance, had been that the elimination of violence as a means of resolving conflicts, represented by the procedures of democracy within states, required a structural complement between states. Liberty and peace, whatever the empirical gaps or torsions between them, logically belonged together. In the late eighteenth and mid-nineteenth centuries, a considerable range of thinkers had believed that history was in the process of delivering their union. Kant or Mazzini were confident that the spread of republican governments would do away with war. Saint-Simon, Comte and Spencer thought that industrial society would make military conflict an anachronism. Cobden expected the growth of trade to ensure amity between nations. Bebel and Jaurès were sure socialism would bring lasting peace between peoples. All of these hopes, plausible as they seemed at the time, were dashed in the twentieth century. The barriers against mutual slaughter to which they had looked proved to be made of clay. Merchants did not replace warriors; peoples proved as truculent as princes; communist states attacked each other. [19] Yet now that nuclear annihilation threatened humanity, peace was a universal imperative as it had never been before. Bobbio had no time for Cold War orthodoxy. Deterrence theory was self-contradictory, purporting to prevent the risk of atomic war by the very weapons that created it. The balance of terror was inherently unstable, preordained to escalation rather than equilibrium. [20] Disarmament treaties were welcome if secured, but did not constitute either a radical or a reliable alternative. Moral solutions to the problem of war, however noble, were not more satisfactory than such instrumental ones, since they required an improbable transformation of humanity. The most credible path for putting an end to the nuclear arms race was institutional.
If the roots of war lay in the system of states, logically two remedies were possible. If conflicts were generated by the structure of international relations, a juridical solution was indicated; if their causes lay in the internal character of the states making up the system, the solution would have to be social. In the first case, peace could be secured only by the creation of a super-state, endowed with a global monopoly of violence, capable of enforcing a uniform legal order across the world. In the second, it could come only by a transition to socialism, leading to a universal withering away of the state itself. A single Hobbesian sovereignty, or a Marxist Sprung in die Freiheit: such was the choice. [21] Without claiming that this meant the elimination of coercion, since by definition the state was always a concentration of violence, Bobbio held the sole realistic prospect for global peace to be Hobbesian. The menace of a nuclear conflagration could be laid to rest only by a universal state. Structurally, that could become a super-despotism, such as Kant had feared. [22] But, unlike Rawls or Habermas, Bobbio was prepared to contemplate this risk, because it was less than the danger of planetary destruction they ignored. Once the Cold War was over, Bobbio became more concerned to furnish his Hobbesian framework with a Lockean foundation, by stressing the need for a democratic, rather than authoritarian, incarnation of the Absent Third--one always preferable, but now that the Soviet bloc had collapsed, increasingly possible. Nevertheless, the world government he advocated remained a much more centralized structure than Rawls's law of peoples or Habermas's cosmopolitan consciousness, and involved less idealization of its conditions. 
Even adjusted to post-Cold War circumstances, the link of any such authority to democracy was logically weaker, since its primary legitimation was pacification of inter-state relations rather than a mimesis of intra-state norms--not devices like the original position or discourse theory replicated at international level, but a supervening logic at that level itself, in keeping with Bobbio's dictum, unthinkable for the other two, that `it cannot escape anyone who views history without illusions that relations between rulers and ruled are dominated by the primacy of foreign over domestic policies'. [23]

Swords and paper

So too human rights, though they eventually played a role in Bobbio's prescriptions for a peaceful international order very similar to their position in the agendas of Rawls and Habermas, were always seen in a quite different light. At no point does Bobbio suggest that they magically harmonize the moral intuitions of the world's great religions, or can be regarded as principles of natural law, or are general requirements of modernity. They were not less precious to him for that. But a realistic view of them is incompatible with their standard descriptions. There are no `fundamental' natural rights, since what seems basic is always determined by a given epoch or civilization. Since they were first proclaimed, the list of human rights has typically been ill-defined, variable and often contradictory. Such rights continually conflict with each other: private property with civic equality, freedom of choice with universal education, and more. Since ultimate values are antinomic, rights appealing to them are inevitably inconsistent. No historical synthesis between liberal and socialist conceptions has yet been realized. Thus human rights lack any philosophical foundation. Their only warrant is factual: today, all governments pay formal homage to the UN Declaration of Human Rights.
This empirical consensus gives them a contingent universality that is their real basis. [24] Bobbio's account of human rights is thus a far cry from the deontological versions of Rawls or Habermas. It is radically historical. For Hobbes, the only right was to life itself--the individual could refuse to lay it down for the state. Since Hobbes's time, the list of rights claimed by citizens has been progressively extended: at first comprising liberties from the state, then liberties in the state, and eventually liberties through the state. The right to national self-determination, vehemently rejected by Habermas, belonged to these conquests. There was no end in sight to the dynamic of an `Age of Rights'--today, rights to truthful information and to participation in economic power were on the agenda. But theoretical declamation was one thing; practical observance another. The new global ethos of human rights was resplendent only in solemn official declarations and learned commentaries. The reality was `their systematic violation in virtually all countries of the world (perhaps we could say all countries, without fear of error), in relations between the powerful and the weak, the rich and the poor, the knowing and the uninstructed'. [25] Law, in turn, could not be viewed in the starry-eyed fashion of Habermas or Rawls. Wars and revolution--the exercise of external and internal violence--were often the source of legal codes. Legitimacy was typically conferred by victory, not the other way around. Once in place, laws could be compared to a damming or canalization of the powers of existing social groups. When the dykes break, an extraordinary law-making power tumbles forth, creating a new legitimacy: ex facto oritur jus. `Law cannot dispense with the use of force and is always founded in the last instance on the right of those who are strongest, which only sometimes, and contingently, coincides with the right of those who are most just'. 
[26] We are a long way from the premises of a Habermasian jurisprudence. Bobbio, though his accents could alter, never wavered from a basic fidelity to Hobbes's maxim: auctoritas non veritas facit legem. The UN should be vested with powers to enforce the human rights it proclaimed. But the gap between its promises and performance remained wide. It had not secured the peace or friendship between nations that its Charter had invoked. Its main achievement to date was never envisaged by its founders--the impetus given by the General Assembly in December 1960 to decolonization, the greatest single progress of political emancipation in the second half of the twentieth century. [27] Like Habermas, Bobbio proposed no determinate programme for reduction of social inequalities on a global scale. But the strength of his feeling about these set him apart too. The real problem of the time, which the nuclear arms race prevented any of the rich nations from addressing, was death by famine in the poor countries of the South. [28]

War on outlaws

If such were the principal differences of theoretical prospectus, what of the political responses of the three thinkers to the new landscape of violence after the Cold War? Rawls, coherent with the silence of a lifetime, made no comment on the guerres en chaîne of the nineties. But the logic of a sanction for them is written on every other page of The Law of Peoples. There the philosopher of justice not only offers a blank cheque for military interventions to protect human rights, without even specifying what authority, other than `democratic peoples' at large, is empowered to decide them. He even exceeds State Department jargon with his talk of `outlaw' states--a term inviting law-abiding nations to dispatch them still more swiftly than merely `rogue' ones. The political assumptions at work in such language can be found in such historical illustrations as the book offers.
Although Rawls mentions no contemporary political events, he touches on enough past ones to reveal, in this area, a disconcertingly uncritical mind. The slaughter of the First World War was inevitable, because `no self-respecting liberal people' could have accepted German demands on France in 1914. [29] The fire-bombing of Hamburg was justified in the Second World War, if not that of Dresden. Though the destruction of Japanese cities, culminating in Hiroshima and Nagasaki, was a great wrong, it represented simply a `failure of statesmanship' on the part of Truman, who otherwise--loyalty oaths and suborning of the UN presumably to witness--was `in many ways a good, at times a very good president'. [30] An excellent guide to just wars is provided by a work explaining why Israel's pre-emptive strike of 1967 was one. [31] Outlaw societies at one time included Habsburg Spain and Bourbon or Napoleonic France--but not Hanoverian or Victorian England, let alone Gilded Age America. Such miscreants are `unsatisfied' powers. Nuclear weapons are essential to keep their modern counterparts in check. [32] Even Rawls's coinage of the notion of `decent', as distinct from democratic, peoples simply shadows the geography of the US security system. The imaginary Muslim society of `Kazanistan' that Rawls conjures up to illustrate the notion can be read as an idealized version of Kuwait or Saudi Arabia--reliable clients whose traditional, if less than liberal, political systems are to be respected, while outlaws in their neighbourhood are removed. Equipped with such credentials, Operation Desert Storm might well be described as the Law of Peoples in real time. Habermas was more explicit. The allied campaign to punish Iraq's brazen violation of international law in seizing Kuwait was an important step forward in the creation of a global public sphere.
Although it was not fought under UN command, and was unaccountable to the Security Council, it invoked the UN, and this was better than nothing: `for the first time the United States and its allies were offered the objective possibility of temporarily assuming the (presumably neutral) role of police force to the United Nations'. Admittedly, the result was a hybrid action, since power-political calculations were not absent from its execution; but it was now plain that `the enforcement of international law has to be carried out by an organized co-operation of the international community, not by some utopian (in the worst sense of the word) world government'. Moreover, and perhaps most importantly, the Gulf War was justified not merely by Iraq's annexation of Kuwait, but by its menace to Israel, which posed `the nightmare scenario of an Israel encircled by the entire Arab world and threatened with the most horrific kinds of weapons'. [33] Since violations of international law had never hitherto troubled Habermas overmuch--when Turkey invaded Cyprus, Indonesia annexed East Timor, let alone Israel seized East Jerusalem and occupied the West Bank, there is no record of his being moved to comment on them--it seems clear that political feelings rather than legal arguments were the principal pressure behind Habermas's endorsement of Desert Storm. On the one hand, there was his self-declared, long-standing posture of loyalty to the West. For forty years he had held that Germany could only be purged of its malign past, and put all suspect notions of a Sonderweg behind it, by an `unconditional orientation' to the West. This had been Adenauer's great achievement, which as a young man he had failed to understand, and which must remain the pole-star of the Federal Republic. After 1945, it was this orientation that had given Germans `an upright posture'.
[34] But there was also, after the Final Solution, and crucially, the special responsibility of Germany to Israel--a vulnerable democracy `still obliged to act as an outpost of the Western world' in the Middle East. Since the founding of the Federal Republic, Habermas notes approvingly, `solidarity with Israel has been an unwritten law of German foreign policy'; only anti-Semites could question it. [35] In the mixture of motivations for Habermas's support of the Gulf War, this was probably the most powerful.

Scruples

Not a few admirers of Habermas, in Germany and outside it, were taken aback by this philosophical theorization of a war fought, on the admission of the US administration, essentially over the control of oil-wells. Signs of an uneasy conscience could be detected in Habermas himself, who was quick to express reservations about the military tactics employed to win the war, and even to concede that the claim to UN legitimacy for it `served largely as a pretext'. [36] But such qualifications, calculated to disarm critics, only underline the crudity of his subsequent conclusion, sweeping principles away in the name of deeds. Dismissing the objection that negotiations for a peaceful resolution of the conflict had scarcely been exhausted, Habermas declared in the spirit of a saloon-bar Realpolitik: `It is a little academic to subject an event of such brutality to a pedantically normative assessment after the fact.' [37] The rhetorical movement of Bobbio's response to the Gulf War was uncannily similar. Operation Desert Storm, Bobbio explained as it rolled into action, was a just war of legitimate defence against aggression. Saddam Hussein, bidding to become a future emperor of Islam, was a great international danger. A sanguinary dictator at home, and an expansionist warlord abroad, he would multiply aggressions to the end of his days, if he were not checked now.
Like Hitler, he was bent on extending the theatre of conflict ever further, as his raining of rockets on Israel showed. [38] Bobbio's position caused more of an uproar than Habermas's, in part because there was still a much stronger Left in Italy than in Germany, but also because he himself had been such an eloquent voice against the bellicosity of the Cold War. Criticism from friends and pupils, shocked by his apparent volte-face, came thick and fast. In the face of this, Bobbio too, having approved the principle of the war, took his distance from the practice of it. `I readily acknowledge that in the course of the fighting the relationship between the international organism and the conduct of the war has become ever more evanescent, with the result that the present conflict more and more resembles a traditional war, except for the disproportion in strength between the two combatants. Has a great historical opportunity been lost?', he asked after five weeks of uninterrupted American bombing. Looking around him, he confessed `our conscience is disturbed'. The war was just, but--a separate question--was it obligatory? If so, did it have to be fought in this way? Bobbio's reply was taxative. Just as with Habermas, it served no purpose to scruple after the fact. `Any answer to such questions comes too late to change the course of events. Not only would it be irrelevant--"what is done, is done"--but it could appear downright naive, for no-one is in a position to say what would have happened if another path had been chosen to reach the same goal'. [39] The war might not have been necessary, or so bloody. But it was now an accomplished fact. What point was there in quarrelling with it?

NATO's moral order

Eight years later, Habermas greeted Operation Allied Force with more emphatic applause. NATO's attack on Yugoslavia was necessary to stop the crimes against humanity of the Milošević 
regime--`300,000 persons subjected to murder, terror and expulsion', before their rescue by American air-strikes began. There was no basis for casting suspicion on the motives of this intervention, from which the United States stood to gain little. It was a humanitarian war that, even if it lacked a UN mandate, had the `tacit authorization of the international community'. The participation of the Bundeswehr in the attack was the decision of a Red-Green coalition that was the first German government ever to be committed to a cosmopolitan legal order in the spirit of Kant and Kelsen. It expressed a public mood in the Federal Republic which was reassuringly similar to that in the rest of Western Europe. There might be some disagreements between the continental Europeans and the Anglo-Saxons on the importance of consulting the UN Secretary-General or squaring Russia. But `after the failure of negotiations at Rambouillet', the US and member states of the EU proceeded from a common position. [40] It was true, of course, that since human rights were only weakly institutionalized at the international level, `the boundary between law and morality may blur, as in the present case'. Once authorization from the Security Council was denied, NATO could `only appeal to the moral validity of international law'. But that did not mean Carl Schmitt's critique of the moralization of inter-state relations, as fatally radicalizing conflicts between them, applied. Rather, humanitarian interventions like the bombing of Yugoslavia were forced to anticipate the future cosmopolitan order they sought to create. Here there was a distinction between Washington and most European capitals. For the US, global enforcement of human rights supplied a moral compass for national goals. To that fruitful union of idealism and pragmatism, going back to Wilson and Roosevelt, Germans owed their own liberation, and it continued to be as vital as ever. 
`The US has assumed the tasks of keeping order that are incumbent on a superpower in a world of states that is only weakly regulated by the UN'. But the moral imperatives it acted on needed to be institutionalized as legal norms with binding international force. Happily, the UN was on the road to closing the gap between them, even if the transition between power politics and an emergent cosmopolitan order still involved a common learning process. [41] In the Balkans as in the Gulf, Habermas was careful to season his plea for war with provisos of conscience. On the one hand, collateral damage to the civilian population of Yugoslavia created a sense of disquiet: were the brutal military means used to rescue the Kosovars always proportionate to the compassionate end? There was reason to doubt it. On the other hand, what would happen if Operation Allied Force henceforth provided the model for humanitarian interventions at large? The West had been obliged to bypass the UN in this case: but that should remain an exception. `NATO's self-authorization cannot be permitted to become a matter of routine'. [42] With this, ironically--in an essay whose title is taken from Schmitt's lapidary dictum `humanity, bestiality', and is devoted to refuting it--Habermas ended by innocently illustrating the very theory of law he wished to refute. `Sovereign is he who decides the exception', runs the famous opening sentence of Political Theology. Not norms, but decisions, argued Schmitt, were the basis of any legal order. `The rule proves nothing, the exception proves everything. It confirms not only the rule but also its existence, which derives only from the exception'. [43] Kant or Kelsen, invoked by Habermas at the outset, offered no affidavits for America's war in the Balkans. To justify it, he unwittingly found himself driven to reproduce Schmitt. 
For sovereign, in effect, was the superpower that delivered the ultimatum of Rambouillet designed to furnish the occasion for war, and disseminated the myth of a hundred thousand dead to motivate it; and sovereign the philosopher who now explained that the exception anticipated the rule of the future. Unlike Habermas, Bobbio had admired and corresponded with Schmitt. But in justifying the Balkan War, he had a greater authority in mind. Milošević was a tyrant like Saddam, who needed to be wiped off the face of the earth: NATO's attack on him should be regarded as a police action rather than an international war, and its means be proportional to its ends. It made no sense to speak any longer of just or unjust wars: all that could be asked was whether a war was legal or not, and effective or not. But today another kind of warrant existed. For as a superpower the United States had acquired a kind of `absolute right that puts it completely outside the constituted international order'. In practice, America had no need of legal justification for its wars, for its record in defending democracy in the three decisive battles of the twentieth century--the First World War, the Second World War and the Cold War--gave its de facto pre-eminence an ethical legitimacy. Europeans owed their freedom to the United States, and with it an unconditional gratitude. Wilson, Roosevelt and Reagan had fought the good cause, defeating the Central Powers, Fascism and Communism, and so making possible the normal democratic world we now live in. Hegel's Philosophy of Right had understood such a role. In every period of history, one nation is dominant, and possesses an `absolute right as bearer of the present stage of the world spirit's development', leaving other nations without rights in face of it. [44] This far-reaching accolade was, once again, not without troubled after-thoughts; which were, once again, quieted with a further reassuring reflection. 
After seven weeks of bombing, Bobbio felt that Operation Allied Force had been incompetently executed, and produced a mess. Now expressing doubts that ethnic cleansing in Kosovo had started before the war, rather than being occasioned by it, he feared that a campaign to protect human rights was in the process of violating them. Yet this did not alter the war's general character, as an exercise of licit against illicit force. Habermas was right to maintain that international law was becoming--however imperfectly--institutionalized as a set of enforceable rules, in one of the most extraordinary and innovative developments of its history. Humanity was moving across the frontier from the moral to the juridical, as his German colleague had seen. [45]

`Redeeming the irredeemable'

By the time of the next Western military expedition, Bobbio had withdrawn from comment on public affairs. But in the Afghan war Habermas found vindication for his judgement of the trend of the time. Although the new Republican administration was deplorably unilateralist--even if European governments bore some responsibility for failing to sustain sager counsels in Washington--the coalition against terrorism put together by it was a clever one, and had acted with good reason to remove the Taliban regime. True, the staggering asymmetry in weaponry between the American armada in the skies and bearded tribesmen on the ground, in a country long victim of rival colonial ambitions, was a `morally obscene sight'. But now it was over, and there was no point in repining. For `in any case, the Taliban regime already belongs to history'. The UN was still too weak to fulfill its duties, so the US had taken the initiative, as in the Balkans. But with the UN-sponsored conference in Bonn to establish a new government in liberated Kabul, the outcome had been a happy step forward in the transition, which had begun with the establishment of no-fly zones over Iraq, from international to cosmopolitan law. 
[46] A year later, Habermas was less serene. The new National Security Strategy of the Republican administration was provocatively unilateralist. The United States should not invade Iraq without the authorization of the United Nations--although the German government was also wrong in refusing such an invasion in advance, rather than declaring its unreserved respect for whatever the Security Council might decide. There might have arisen something whose possibility Habermas had never imagined, `a systematically distorted communication between the United States and Europe', setting the liberal nationalism of the one against the cosmopolitanism of the other. [47] Once launched, Operation Iraqi Freedom confirmed these forebodings. On the one hand, the liberation of a brutalized population from a barbaric regime was `the greatest of all political goods'. On the other, in acting without a mandate from the United Nations, the US had violated international law, leaving its moral authority in ruins and setting a calamitous precedent for the future. For half a century, the United States had been the pacemaker of progress towards a cosmopolitan order vested with legal powers, overriding national sovereignty, to prevent aggression and protect human rights. Now, however, neo-conservative ideologues in Washington had broken with the reformism of UN human-rights policies, in favour of a revolutionary programme for enforcing these rights across the world. Such hegemonic unilateralism risked not only stretching American resources and alienating allies, but also generating side-effects that `endangered the mission of improving the world according to the liberal vision'. Fortunately, the UN had suffered no really significant damage from this episode. Its reputation would only be injured `were it to try, through compromises, to "redeem" the irredeemable'. [48] Such thoughts did not last long. 
Six months later, when the UN Security Council unanimously passed a resolution endorsing the American occupation of Iraq and the client regime it had set up in Baghdad, Habermas offered no word of criticism. Though saddened by the change of political scene in America--`I would never have imagined that such an exemplary liberal country as the United States could be so indoctrinated by its government'--he now had no doubt that the Coalition Provisional Authority must be supported. `We have no other option but to hope that the United States is successful in Iraq'. [49] The response by the two philosophers to successive wars waged by the West after the collapse of the Soviet bloc thus exhibits a consistent pattern. First, military action by Washington and its allies is justified on normative grounds, invoking either international law (the Gulf), human rights (Kosovo, Afghanistan), or liberation from tyranny (Iraq). Then, qualms are expressed over the actual way that violence is unleashed by the righteous party (Gulf, Kosovo, Afghanistan, Iraq), in a gesture of humanitarian punctilio. Finally, these in turn are casually minimized or forgotten in the name of the accomplished fact. The tell-tale formula `in any case', peremptorily ratifying the deed once done, says everything. The political complexion of such positions is clear enough. What is most striking about them, however, is their intellectual incoherence. No-one could suspect Bobbio or Habermas of an inadequate background in logic, or inability to reason with rigour. Yet here philosophy gives way to such a lame jumble of mutually inconsistent claims and excuses that it would seem only bad conscience, or bad faith, could explain them.

The best of states?

Behind the dance-steps of this occasionalism--swaying back and forth between impartial principle, tender scruple and brute fact--can be detected a simpler drive shaping the theoretical constructions of all three thinkers. 
Rawls describes his Law of Peoples as a `realistic utopia': that is, an ideal design that withal arises out of and reflects the way of the world. Habermas's cosmopolitan democracy, a global projection of his procedural theory of law, has the same structure. Even Bobbio, in the past resistant to any such confusion between facts and values, eventually succumbed to his own, with sightings of a new signum rememorativum of historical development as humanity's improvement. In each case, the underlying wish is a philosophical version of a banal everyday inclination: to have one's cake and eat it. Against criticisms pointing to the disgraced reality of inter-state relations, the ideal can be upheld as a normative standard untainted by such empirical shortcomings. Against charges that it is an empty utopia, the course of the world can be represented as an increasingly hopeful pilgrimage towards it. In this va-et-vient between ostensible justifications by universal morality and surreptitious appeals to a providential history, the upshot is never in doubt: a licence for the American empire as placeholder for human progress. That this was not the original impulse of any of these thinkers is also clear, and there is something tragic in the descent that brought them to this pass. How is it to be explained? Part of the answer must lie in a déphasage of thinkers whose outlook was shaped by the Second World War, and its sequels, in the new landscape of power after the end of the Cold War. Old age mitigates judgement of the final conceptions of Rawls or Bobbio. When he published The Law of Peoples, Rawls was already the victim of a stroke, and writing against time. When he pronounced on the Balkan War, Bobbio was over ninety; and no contemporary has written so movingly of the infirmities of such advanced years, in one of the finest of all his texts, De Senectute. But certainly, there was also long-standing blindness towards the global hegemon. 
In Rawls's case, veneration of totems like Washington and Lincoln ruled out any clear-eyed view of his country's role, either in North America itself or in the world at large. Regretting the US role in overthrowing Allende, Arbenz and Mossadegh--`and, some would add, the Sandanistas [sic] in Nicaragua': here, presumably, he was unable to form his own opinion--the best explanation Rawls could muster for it was that while `democratic peoples are not expansionist', they will `defend their security interest', and in doing so can be misled by governments. [50] So much for the Mexican or Spanish-American Wars, innumerable interventions in the Caribbean, repeated conflicts in the Far East, and contemporary military bases in 120 countries. `A number of European nations engaged in empire-building in the eighteenth and nineteenth centuries', but--so it would seem--happily America never joined them. [51] Habermas's vision of the United States is scarcely less roseate. Although undoubtedly culpable of lapses in such lands as Vietnam or Panama, Washington's overall record as a champion of liberty and law has been matchless--for half a century blazing the trail towards a disinterested cosmopolitan order. No exhortation recurs with such insistence in Habermas's political writing as his call to his compatriots to show unconditional loyalty to the West. The fact that Germany itself has usually been thought to belong to the West indicates the more specialized, tacit identification in Habermas's mind: intended are the Anglophone Allies who were the architects of the Federal Republic. If the United States looms so much larger than the United Kingdom in the ledger of gratitude and allegiance, this is not simply a reflection of the disproportion in power between the two. For Habermas, America is also a land of intellectual awakening in a way that Britain has never been. 
To the political debt owed General Clay and Commissioner McCloy was added the philosophical education received from Peirce and Dewey, and the sociological light of Mead and Parsons. This was the West that had allowed Germans of Habermas's generation to stand erect again. Against such a background, endorsement of American military interventions in the Gulf, the Balkans and Afghanistan came naturally. At the invasion of Iraq, however, Habermas baulked. The reason he gave for doing so is revealing: in marching to Baghdad, the United States acted without the authorization of the Security Council. But, of course, exactly the same was true of its attack on Belgrade. Since violation of human rights was, by common consent, far worse in Iraq than in Yugoslavia, why was a punitive expedition against the latter fully justified, but not the former? The difference, Habermas explains, is that the Balkan War was legitimated `after the fact', not only by the need to stop ethnic cleansing and supply emergency aid, but above all by `the undisputed democratic and rule-of-law character of all the members of the acting military coalition'--even if the US and UK had approached the necessary task in a less pure spirit than Germany, France, Italy or other European members of NATO. Over Iraq, however, a once-united `international community' had split. The phrase, standard euphemism of every mendacious official broadcast and communiqué from Atlantic chancelleries, speaks for itself. The political confines of the community that stands in for the world are never in doubt: `today, normative dissent has divided the West itself'. [52] Yet since, in Habermas's own words, there can be no greater good than liberating a people from a brutal tyranny, why should prevention of ethnic cleansing or provision of aid--presumably lesser objectives--supply General Clark with philosophical credentials denied to General Franks? 
It is plain that the crucial distinguo lies elsewhere: in European responses to American initiatives. So long as both sides of the Atlantic concur, the `international community' remains whole, and the UN can be ignored. But if Europe demurs, the UN is sacrosanct. So naively self-serving an assumption invites, in one sense, only a smile. What it points to, however, is the disintegration of a larger one. The West upheld in Habermas's credo was always an ideological figure, an unexamined topos of the Cold War, whose assumption was that America and Europe could for all practical purposes be treated as a single democratic ecumene, under benevolent US leadership. The unwillingness of Berlin and Paris to rally behind Washington in the attack on Iraq undid that long-held construction, rendering an unconditional orientation to the West meaningless. In this emergency, Habermas fell back on European values, now distinct from somewhat less commendable American ones, as a substitute lode-star in international affairs. But, setting aside the work of lustration required to yield an uplifting common ethos out of Europe's bloody past, or even its self-satisfied present, the new construct is as incoherent as the old. Not only does Europe, as currently understood by Habermas, have to exclude Britain, for undue similarity of outlook to the United States, but it cannot even encompass the continental states of the EU itself, a majority of whose members supported rather than opposed the liberties taken by the US with the UN Charter. So in a further geopolitical contraction, Habermas has been driven to advocate a Franco-German `core' as the final refuge out of which a future and better EU, more conscious of its social and international responsibilities, may one day emerge, harbinger of a wider cosmopolitan order. [53] But this is a reculer pour mieux sauter without self-criticism. 
Habermas still appears to believe, heedless of well-advertised findings to the contrary, that NATO's attack on Yugoslavia--for him, a last precious moment of Euro-American unity--was warranted by Belgrade's refusal to treat, and determination to exterminate. That the Rambouillet ultimatum was as deliberately framed to be unacceptable, furnishing a pretext for war, as the Austrian note to Serbia in 1914; that Operation Horseshoe, the plan for mass ethnic cleansing of Kosovo invoked by his Foreign Minister to justify the war, has been exposed as a forgery of the Bulgarian secret services; and that the number of Albanians in the region killed by Serb forces was closer to five than to the hundreds of thousands claimed by Western spokesmen--details like these can be swept under the ethical carpet as casually as before. For now Yugoslavia too, like the Taliban, `already belongs to history'. Even in Iraq, Habermas--in this like most of his fellow-citizens in Germany or France--objects only to the American invasion, not occupation of the country. The deed once consummated, it becomes another accomplished fact, which he wishes well, even if he hopes it will not be repeated.

Leviathan on the Potomac

Bobbio's embrace of American hegemony was quite distinct in origin. Unlike Habermas, he never showed any special attachment to the United States after 1945, or even much interest in it. Did he ever so much as visit it? No reference of any intellectual significance for him seems to have been American. His post-war sympathies went to Britain, where he inspected the Labour experiment and wrote warmly, if not uncritically, about it. 
During the high Cold War, he sought energetically to resist polarization between East and West, and when he became active in the peace movements of the seventies and eighties, he never put the United States on a higher moral or political plane than the USSR as a nuclear power, holding them equally responsible for the dangers of an arms race threatening all humanity. America, however, was `the more powerful of the two masters of our life and of our death', and it was therefore all the more discouraging to hear maxims from Reagan that could only be compared to the motto Louis XIV had inscribed on his cannon: Extrema ratio regis. [54] But when the unexpected happened, and Gorbachev lowered the Soviet flag, ending the Cold War with a complete American victory, there was in Bobbio's outlook one tenacious idea that allowed him to make a radical adjustment to the new world order. He had always maintained that the most viable solution to the problem of endemic violence between states was the creation of a super-state with a monopoly of coercion over all others, as guarantor of universal peace. During the Cold War he envisaged this hitherto Absent Third ultimately materializing in the shape of a world government, representing a de jure union based on a multiplicity of states. But when, instead, one existing state achieved a de facto paramountcy over all others of a kind never seen before, Bobbio could--without inconsistency--adapt to it as the unpredictable way history had realized his vision. America had become the planetary Leviathan for which he had called. So be it. The Hobbesian realism that had always distinguished him from Rawls or Habermas made him, who had been far more critical of the international order as long as the Cold War persisted, ironically capable of a much more coherent apology for the US empire once the Cold War was over. 
Hobbes could explain, as they could not, why the pax Americana now so often required resort to arms, if a juridical order protected by a global monopoly of force was finally to be created. `The law without a sword is but paper'. Bobbio's realism, what can be seen as the conservative strand in his thinking, had always coexisted, however, with liberal and socialist strands for which he is better known, and that held his primary moral allegiance. The balance between them was never quite stable, synthesis lying beyond reach. But in extreme old age, he could no longer control their tensions. So it was that, instead of simply registering, or welcoming, the Hobbesian facts of American imperial power, he also tried to embellish them as the realization of democratic values, in a way that--perhaps for the first time in his career--rang false and was inconsistent with everything he had written before. The triptych of liberation invoked as world-historical justification for the Balkan War is so strained as virtually to refute itself. The victory of one set of imperialist powers over another in 1918, with the American contribution to mutual massacre tipping the balance: a glorious chapter in the history of liberty? The D-Day landings of 1944, engaging less than a sixth of Hitler's armies, already shattered in the East: `totally responsible for the salvation of Europe'? [55] An apotheosis of Reagan for his triumph in the Cold War: who would have imagined it from the descriptions of Il terzo assente? There was something desperate in this last-minute refrain, as if Bobbio were trying to silence his own intelligence.

Sparks of defiance

It would be a mistake to deduce the late conclusions of all three thinkers in any simple way from the major body of their writing. That this is so can be seen from the chagrin of pupils and followers, steadfast in admiration for each man, but also loyal to what they felt was the original inspiration of a great oeuvre. 
Pogge's disappointment with The Law of Peoples, Matuštík's discomfort with Between Facts and Norms and dismay at plaudits for the Balkan War, the reproaches of Bobbio's students to the claims of Una guerra giusta?, form a family of similar reactions among cohorts less disoriented in the new international conjuncture. [56] Nor would it be right to think that involution was ever complete in these philosophical minds themselves. To the end, flashes of a more radical temper can be found in them, like recollections of a past self. For all his apparent acceptance of capital as an unappealable condition of modernity, ratified by the irresponsible experiment of communism, Habermas could yet write, less reassuringly for its rulers, of a system breeding unemployment, homelessness and inequality: `still written in the stars is the date that--one day--may mark the shipwreck of another regime, exercised anonymously through the world market'. [57] Bobbio, despite his approval of the Gulf and Balkan Wars, could in the interval between them denounce the `odious bombardments of Baghdad' ordered by Clinton, and the `vile and servile' connivance of other Western governments with them, as `morally iniquitous'. Few intellectuals then spoke so strongly. [58] Rawls offers perhaps the most striking, and strangest case of all. In the last year of his life, when he could no longer work on them, he published lectures he had given over a decade earlier, under the title Justice as Fairness. Beneath the familiar, uninspiring pleonasm lay a series of propositions at arresting variance with the tenor of Political Liberalism, let alone The Law of Peoples. It had been an error of A Theory of Justice, he explained, to suggest that a capitalist welfare state could be a just social order. The Difference Principle was compatible with only two general models of society: a property-owning democracy or liberal socialism. 
Neither of them included a right to private ownership of the means of production (as distinct from personal property). Both had to be conceived as `an alternative to capitalism'. Of the two, a property-owning democracy--Rawls hinted that this would be the more congenial form in America, and liberal socialism in Europe--was open to Marx's criticism that it would re-create unacceptable inequalities over time, and do little for democracy in the workplace. Whether his objections could be met, or liberal socialism yield better results, only experience could tell. On the resolution of these questions, nothing less than `the long-run prospects of a just constitutional regime may depend'. [59] Such thoughts are foreign to Political Liberalism. They outline, of course, only the range of ideal shapes that a just society might assume. What of actually existing ones? Rawls's answer is startling. After observing that favourable material circumstances are not enough to assure the existence of a constitutional regime, which requires a political will to maintain it, he suddenly--in utter contrast to anything he had ever written before--remarks: `Germany between 1870 and 1945 is an example of a country where reasonably favourable conditions existed--economic, technological and no lack of resources, an educated citizenry and more--but where the political will for a democratic regime was altogether lacking. One might say the same of the United States today, if one decides our constitutional regime is largely democratic in form only'. [60] The strained conditional--as if the nature of the American political system was a matter for decision, rather than of truth--barely hides the bitterness of the judgement. This is the society Rawls once intimated was nearly just, and whose institutions he could describe as the `pride of a democratic people'. In one terse footnote, the entire bland universe of an overlapping consensus capsizes. 
Reason and rage

It is unlikely such flashes of candour were mere passing moments of disaffection. What they suggest is rather an acute tension buried under the serene surface of Rawls's theory of justice. Perhaps the most telling evidence for this is to be found in the unexpected entry of Hegel into his last published writings. Lectures on the History of Moral Philosophy concludes with a respectful, indeed admiring portrait of Hegel as a liberal philosopher of freedom. What drew Rawls, against apparent temperamental probability, to the philosopher of Absolute Spirit? His reconstruction of The Philosophy of Right pays tribute to Hegel's institutional insight that `the basic structure of society', rather than the singular individual, is `the first subject of justice', and sets out Hegel's theory of civil society and the state with historical sympathy. [61] Here too a sharp aside says more than all the glozing pages of Political Liberalism. Hegel's constitutional scheme, Rawls remarks, may well strike us, with its three estates and lack of universal suffrage, as a quaint anachronism. `But does a modern constitutional society do any better? Certainly not the United States, where the purchase of legislation by "special interests" is an everyday thing'. [62] Clinton's America as no improvement on Frederick William III's Prussia: a more damning verdict is difficult to imagine. The principal interest of Hegel, however, lay elsewhere. For Rawls his most important contribution to political thinking, flagged at the outset of the relevant Lectures, and reiterated in Justice as Fairness, was his claim that the task of philosophy was to reconcile us to our social world. Rawls emphasizes that reconciliation is not resignation. Rather, Hegel saw Versöhnung as the way in which we come to accept our political and social institutions positively, as a rational outcome of their development over time. 
[63] The idea of justice as fairness belongs to this conception of political philosophy as reconciliation, he explained. For `situated as we may be in a corrupt society', in the light of its public reason we may still reflect that `the world is not in itself inhospitable to political justice and good. Our social world might have been different and there is hope for those in another time and place'. [64] In these touchingly incoherent sentences, Rawls's philosophy breaks down. Our society may be corrupt, but the world itself is not. What world? Not ours, which we can only wish might have been different, but another that is still invisible, generations and perhaps continents away. The wistful note is a far cry from Hegel. What the theme of reconciliation in Rawls expresses is something else: not the revelation that the real is rational, but the need for a bridge across the yawning gulf between the two, the ideal of a just society and the reality of a--not marginally, but radically--unjust one. That Rawls himself could not always bear the distance between them can be sensed from a single sentence. In accomplishing its task of reconciliation, `political philosophy may try to calm our frustration and rage against our society and its history'. [65] Rage: who would have guessed Rawls capable of it--against his society or its history? But why should it be calmed? Rawls resorted to Hegel in his internal reflections on a constitutional state. On the plane of inter-state relations, Kant remained his philosopher of reference, as the theorist of conditions for a perpetual peace. So too for Habermas. But since Kant failed to envisage the necessary legal framework for a cosmopolitan order, as it started to take shape through the permanent institutions of the United Nations, Habermas, when he came to review the progress made since 1945, also looked towards the philosopher of objective idealism. 
Measured against the sombre background of the disasters of the first half of the twentieth century, he decided, `the World Spirit, as Hegel would have put it, has lurched forward'. [66] As we have seen, Bobbio was responsible for the most pointed appeal to Hegel of all. In one sense, he was more entitled to make it. Welcoming Hegel's idea of reconciliation as akin to his own enterprise of public reason, Rawls drew the line at his vision of the international realm as a domain of violence and anarchy, in which contention between sovereign states was bound to be regulated by war. Habermas's gesture enlisted Hegel, on the contrary, as a patron of cosmopolitan peace. The first could not square his Law of Peoples with the lawlessness of Hegel's states, the second could only enroll Hegel for pacific progress by turning him philosophically inside out. Bobbio, by contrast, could take the measure of Hegel's conception of world history, as a ruthless march of great powers in which successive might founds over-arching right, and invoke it in all logic to justify his approval of American imperial violence. Law was born of force, and the maxim of the conqueror--prior in tempore, potior in jure--still held. `However difficult it is for me to share the Hegelian principle that "what is real is rational", it cannot be denied that sometimes history has vindicated Hegel'. [67] At the end of the twentieth century, reason had once again proved to be the rose in the cross of the present. Yet three less Hegelian thinkers than these could hardly be imagined. The guiding light of all their hopes of international affairs remained Kant. In reaching out at the end for his antithesis, each in their different way engaged in a paradox destructive of their own conceptions of what a just order might be. 
Bobbio, who had most claim on Hegel, was aware of this, and tried to correct himself--he had intended not to justify, but only to interpret the course of the world in the register of the Rechtsphilosophie. There are coherent Hegelian constructions of the time, but they come from minds with whom these thinkers have little in common. Perhaps they would better have avoided wishful thinking by looking again at Kant himself, more realistic than his posterity in imagining a universal history for a race of devils. [1] Bobbio's essay first appeared in the revised third edition of Il problema della guerra e le vie della pace, Bologna 1989, and in English in Daniele Archibugi and David Held, eds, Cosmopolitan Democracy, Cambridge 1995, pp. 17-41. Habermas's essays appeared in, respectively, Die Einbeziehung des Anderen, Frankfurt 1996, pp. 192-236, and Die postnationale Konstellation, Frankfurt 1998, pp. 91-169; and in English in The Inclusion of the Other, Cambridge, MA 1998, pp. 165-202, and The Postnational Constellation, Cambridge 2001, pp. 58-112. [2] `Democracy and the International System', pp. 22-31. [3] Il terzo assente, Milan 1989, p. 115 ff. [4] See Realizing Rawls, Ithaca 1989, pp. 9-12; `Priorities of Global Justice', in Pogge, ed., Global Justice, Oxford 2001, pp. 6-23. [5] The Law of Peoples, Cambridge, MA 1999, p. 108; henceforward LP. [6] LP, p. 39. [7] LP, p. 97. For Rawls's cult of Lincoln, see inter alia Thomas Nagel, `Justice, Justice Thou Shalt Pursue', New Republic, 13 January 2000. [8] LP, pp. 119-20. [9] Die Einbeziehung des Anderen, pp. 139-40; The Inclusion of the Other, p. 115; henceforward EA and IO. [10] Die Normalität einer Berliner Republik, Frankfurt 1995, pp. 177-9; A Berlin Republic: Writings on Germany, Lincoln, NE 1997, pp. 170-2; henceforward NBR and BR. [11] Die Postnationale Konstellation, pp. 122-35; The Postnational Constellation, pp. 80-8; henceforward PK and PC. [12] PK, pp. 155-6; PC, p. 103. [13] PK, p. 89; PC, p. 56.
[14] PK, pp. 162-6; PC, pp. 108-11. [15] EA, pp. 221-4; IO, pp. 189-91. [16] Vergangenheit als Zukunft, Zurich 1991, p. 30; The Past as Future, Lincoln, NE 1994, pp. 20-1; henceforward VZ and PF. Rawls had explained that all major world religions were `reasonable' doctrines capable of accepting his principles of justice: Political Liberalism, New York 1993, p. 170. [17] PK, p. 181; PC, p. 121. [18] PK, pp. 191-2; PC, p. 128. Here too the reference--of `reasonably comprehensive doctrines'--is explicitly to Rawls. [19] Il problema della guerra e le vie della pace, Bologna 1984, pp. 113-4, 143-6; henceforward PGVP; Il terzo assente, pp. 34-8; henceforward TA. [20] PGVP, pp. 50-5; TA, pp. 60-8. [21] PGVP, pp. 83-6. [22] PGVP, p. 116; TA, pp. 49-50. [23] TA, p. 94. [24] PGVP (first edition), Bologna 1970, pp. 119-57. [25] Autobiografia, Bari 1999, p. 261. [26] PGVP, p. 111; TA, p. 135. [27] TA, pp. 108-9. [28] TA, p. 181. [29] LP, p. 48. [30] LP, pp. 99-102; Collected Papers, Cambridge, MA 1999, p. 572. [31] `I follow here Michael Walzer's Just and Unjust Wars. This is an impressive work, and what I say does not, I think, depart from it in any significant respect': LP, p. 95. [32] LP, pp. 48-9. [33] VZ, pp. 19, 18, 23; PF, pp. 12, 11, 15. [34] VZ, p. 64; PF, p. 48; NBR, pp. 93-4, 108; BR, pp. 88-9, 102. [35] VZ, p. 28; PF, p. 18; `Letter to America', The Nation, 16 December 2002. [36] VZ, p. 20; PF, p. 12. [37] VZ, p. 22; PF, p. 14. [38] Una guerra giusta?, Venice 1991, pp. 39, 22, 48, 60; henceforward GG. [39] GG, pp. 23, 90. [40] `Bestialität und Humanität: ein Krieg an der Grenze zwischen Recht und Moral', Die Zeit, 29 April 1999; in English as `Bestiality and Humanity: a War on the Border between Law and Morality', in William Buckley, ed., Kosovo. Contending Voices on the Balkan Intervention, Grand Rapids, MI 2000, pp. 307-8, 312. [41] `Bestiality and Humanity', pp. 313-6. [42] `Bestiality and Humanity', pp. 309, 316.
[43] Carl Schmitt, Politische Theologie, Munich and Leipzig 1922, p. 15. [44] `Perché questa guerra ricorda una crociata', L'Unità, 25 April 1999. [45] `La guerra dei diritti umani sta fallendo', L'Unità, 16 May 1999. [46] `Fundamentalism and Terror', in Giovanna Borradori, Philosophy in a Time of Terror. Dialogues with Jürgen Habermas and Jacques Derrida, Chicago 2003, pp. 27-8. [47] `Letter to America', The Nation, 16 December 2002. [48] `Verschliessen wir nicht die Augen vor der Revolution der Weltordnung: Die normative Autorität Amerikas liegt in Trümmern', Frankfurter Allgemeine Zeitung, 17 April 2003; in English as `Interpreting the Fall of a Monument', Constellations, vol. 10, no. 3, 2003, pp. 364-70. [49] `Ojalá Estados Unidos tenga éxito en Iraq', La Vanguardia, 4 November 2003. [50] LP, p. 53. [51] LP, pp. 53-4. [52] `Interpreting the Fall of a Monument', p. 366. [53] `Unsere Erneuerung--Nach dem Krieg: Die Wiedergeburt Europas' (with Jacques Derrida), Frankfurter Allgemeine Zeitung, 31 May 2003; in English as `February 15, or What Binds Europeans Together: A Plea for a Common Foreign Policy, Beginning in the Core of Europe', Constellations, September 2003, pp. 291-7. [54] TA, p. 208; written on 28 August 1983. [55] `Perché questa guerra ricorda una crociata'. [56] See Thomas Pogge, Global Justice, pp. 15-7; Martin Beck Matuštík, Jürgen Habermas. A Philosophical-Political Profile, Lanham, MD 2001, pp. 247-51, 269-74; Eleonora Missana, Massimo Novarino, Enrico Passini, Stefano Roggero, Daniela Steila, Maria Grazia Terzi, Stefania Terzi, `Guerra giusta, guerra ingiusta. Un gruppo di studenti torinesi risponde a Norberto Bobbio', Il Manifesto, 29 January 1991. [57] NBR, p. 17; BR, pp. 12-13. [58] `Questa volta dico no', La Stampa, 1 July 1993. [59] Justice as Fairness, Cambridge, MA 2001, pp. 178-9; henceforward JF. [60] JF, p. 101. [61] Lectures on the History of Moral Philosophy, Cambridge, MA 2000, p. 366; henceforward LHMP. [62] LHMP, p. 357.
[63] LHMP, pp. 331-2. [64] JF, pp. 37-8. [65] JF, p. 3. [66] EA, p. 207; IO, p. 178. [67] `Perché questa guerra ricorda una crociata'. From checker at panix.com Tue Jul 26 00:32:42 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:32:42 -0400 (EDT) Subject: [Paleopsych] CHE: Whose Work Is It, Anyway? Message-ID: Whose Work Is It, Anyway? The Chronicle of Higher Education, 5.7.29 http://chronicle.com/prm/weekly/v51/i47/47a03301.htm [I won't be going to the hearings but will e-mail in material I have written on this issue. Basically, my idea is to require copyright owners to pay $100 every five years, after an initial twenty, to keep their copyright alive. This requirement will not be a large factor for the big publishing houses (which are doing the bulk of the lobbying). They will only have to decide whether certain books that they have let go out of print are likely to generate a profit if they are brought back into print. They will probably not do so unless their expectations are for profits several times greater than $100. If an old book is hardly selling at all, the effect of my proposal will be to make it go out of print (by allowing it to go into the public domain) a very little bit earlier than usual. [Does the $100 figure sound too high or too low? [I wonder how many books in print and under copyright are more than twenty years old anyhow. I also wonder how many books that go out of print ever come back into print. What about the age of print-on-demand books? [Does anyone know? [How to deal with photographs, daily newspapers, and so on, is something I haven't figured out. Obviously, no photographer could pay $100 for every photograph. [My proposal should most definitely be applied to sound recordings made before 1972! The failure of that 1976 revision of the copyright act to preempt State laws has caused nothing but headaches over trivial financial issues.
(I realize that this violates my strong presumption for States' Rights, but it is, after all, only a presumption and I am not calling for a break-up of the Union over copyright issues.) Consider that EMI/Angel's "Great Recordings of the Century" Series contains only one recording from before the end of WW II, namely one by the Cortot-Thibaud-Casals Trio: there's almost no profit to be made from 78s, only fees to lawyers, as the ongoing Capitol vs. Naxos case shows.] ----------------------- The use of 'orphan works' of art and literature, whose creators cannot be identified, puts scholars and artists at odds over changes in copyright law List: [55]Discussions on orphan works, live and online By SCOTT CARLSON Like many other scholars across the country, Joseph Siry might have broken the law to illustrate an article he wrote for an academic journal -- by including an illustration without obtaining permission to do so from its copyright holder. Mr. Siry, who is usually meticulous about clearing copyrights, says he did his best to get permission for the illustration -- a sketch of a building, drawn by a collaboration of architects at several firms, that had influenced a Frank Lloyd Wright design some 50 years ago. But Mr. Siry, a professor of art history at Wellesley College, hit a series of dead ends: The architecture firms involved were out of business, and their onetime principals could not be found. The rendering had appeared in Life magazine, but staff members there told him that the magazine did not own the images. Nor did Life's archives have any record of people connected to the design. With no apparent owner to approve its publication, the image was stuck in copyright limbo, a prime example of what legal experts call an "orphan work." Mr. Siry made a difficult decision: He cited the little information he had about the design and used it in his article anyway, despite the risk of being sued for infringement if an architect turned up later with a legal claim to it.
He was assured by the academic journal, he says, "that this risk was minimal." Still, he expresses discomfort over the choice he made. Many scholars, archivists, and librarians have stories like Mr. Siry's. Orphan works have led to complications not only in publishing but also in digitizing projects, preservation efforts, and the creation of works like film and video documentaries. This week, at the urging of prominent legal scholars, academic-library organizations, technology companies such as Google and Microsoft, and many other interested parties, the U.S. Copyright Office is holding a series of hearings to determine whether copyright law should change to allow for more liberal use of orphan works. Scholars and others weighed in earlier this year, filing comments on the issue with the copyright office in anticipation of the hearings. The American Historical Association, for example, noted that orphan works had become a problem for scholars, "hampering the historian's ability to work with the raw materials of history." The comments reveal that even frequent adversaries on copyright issues agree that changes are needed in how the law governs orphan works. But few people agree on what those changes should be. Many issues surrounding orphan works -- how they should be defined, vetted, and used, and how much a user should pay if a work's "parent" turns up later -- remain subject to vigorous debate, with various groups looking out for their interests. The music-licensing organizations Broadcast Music Inc. and Ascap have proposed that any orphan-works exemptions should not include music. Other parties have suggested that changes in law should apply only to domestically published works, while foreign works and unpublished works should remain strictly protected. 
(Foreign works must be protected to avoid violating international agreements, some lawyers say, and unpublished works may need to be off limits to protect the privacy of owners who might have preferred that the works remain unpublished.) And some groups -- in particular visual artists like photographers and illustrators -- strongly oppose any loosening of the law for orphan works, seeing it as an assault on copyright that will deprive artists and creators of their due. Many Ways to Orphan An orphan work can be a film, a book, a private letter, a painting, or any other creative work covered by copyright, in which protection, through the complexity of the law, can extend as far back as 1923. A work can become orphaned in any number of ways: For example, an artist can die, and the heirs may not know about the artist's copyrighted work. A company that published a novel might go out of business or fall into the hands of another company that does not maintain publication records. It is particularly hard to figure out who took a photograph, unless the name of the photographer or studio is cited somewhere on the print. Works like those add up to a great deal of published material, according to studies conducted by research libraries. Five years ago Carnegie Mellon University's library studied a sample of about 270 items from its holdings; librarians could not find the owners of 22 percent of the works. In response to the U.S. Copyright Office's request for comments, Cornell University librarians added up the money and time spent clearing copyright on 343 monographs for a digital archive of literature on agriculture. Although the library has spent $50,000 and months of staff time calling publishers, authors, and authors' heirs, it has not been able to identify the owners of 58 percent of the monographs. 
"In 47 cases we were denied permission, and this was primarily because the people we contacted were unsure whether they could authorize the reproduction or not," says Peter B. Hirtle, who monitors intellectual-property issues for Cornell's libraries. "Copyright is supposed to advance the sciences and arts, and this is copyright becoming an impediment to the sciences and arts." Restrictions on using orphan works, often imposed by risk-averse lawyers at colleges and museums, affect scholarly work in ways large and small. Wendy Katz, an assistant professor of art history at the University of Nebraska at Lincoln, had trouble finding the copyright owner of a painting she wanted to reproduce in a book. The museum that provided a picture of the painting could offer no help, and her search led only to another scholar who had published the painting without permission, after also failing to find the copyright owner. Ms. Katz, too, eventually published the painting, hoping that the owner would not turn up. The decision bothers her, as she is normally a supporter of copyright for artists, but she believes that scholars should get special consideration in cases like these. "I don't see publication harming the value of the objects," she says. "I'm not making any money from it, and the press is breaking even, if they are lucky." In its comments to the copyright office, the Center for the Study of the Public Domain, at Duke Law School, said whole generations of movies are at risk because of their orphan status. Film deteriorates more rapidly than other media, such as paper. Digitization projects could help preserve the films, but the center notes that donors are not inclined to pay for the costly digitization of movies that the public cannot see because of copyright restrictions. 
Model Proposal Those are the sorts of problems that Peter Andrew Jaszi, a law professor at American University, heard about at copyright conferences and meetings several years ago, before abandoned works were commonly known as "orphans." He encouraged his students at the law school's Glushko-Samuelson Intellectual Property Law Clinic to propose a solution. The clinic's response, filed with the copyright office this year, has come to be seen by many libraries and publishers as a model solution. Its basic points: An orphan work is any work for which an owner cannot be found, regardless of how recently it was published or whether it was published at all. People should be able to use an apparently orphan work after "reasonable effort" to search for its owner, but the law should not spell out what that effort entails. If an owner turns up after a supposedly orphan work has been used, the owner should be able to collect a small amount -- from $100 to $500 -- but not obtain statutory damages, attorneys' fees, or injunctions. The Glushko-Samuelson proposal does not advocate establishing a registry of orphan works, but some copyright experts do. Lawrence Lessig, a Stanford University law professor, recommends requiring authors, musicians, and others to register their work within 25 years of publication. Software developers would get less time -- five years -- because software becomes obsolete much more quickly. A search of the government-supported registry would be enough to determine whether or not a work was an orphan. Proposals by other organizations diverge wildly, but most of them disagree on two main points: how an orphan work should be defined, and what a user should pay if an owner comes along after a work has been used. Of the two, the issue of payment is simpler. Some organizations, such as museums, have recommended paying nothing at all. The J. Paul Getty Trust, the Metropolitan Museum of Art, and the Solomon R.
Guggenheim Foundation have suggested a "safe harbor" of five years after the use of an orphan work, beyond which point the owner of a work would not receive payment, although he or she could negotiate for its continued use. On the other side, comprising mainly groups that represent publishers and authors, some have proposed that a user should pay a "reasonable licensing fee" to the owner, based on what the user might have paid if the owner had been found before publication. By contrast, paying a small, fixed amount of only $100 to $500 "would cause a real unfairness for copyright owners," says Allan R. Adler, a lawyer for the Association of American Publishers, which filed comments jointly with the Association of American University Presses and a software-industry group. "If the user of the work refused to pay even that small amount," he says, "it wouldn't be worth going to court to collect it." Mr. Jaszi, the law professor, contends that stipulating only that a fee be "reasonable" is too vague a standard to be useful. "The problem with that proposal is uncertainty," he says. The reason that people do not use orphan works now, he argues, is that "they don't know what their exposure might be if they use something and end up with a cease-and-desist letter." 'It's a Pain' When hearings convene this week in Washington and on August 2 in Berkeley, Calif., the copyright office might hear far more wrangling over what types of works could be used as orphans -- or whether the issue is as pressing as some say it is. Jane C. Ginsburg, a professor of law at Columbia University, read through the comments submitted to the copyright office while she was submitting her own. "There are an awful lot of submissions that say, 'It's a pain in the butt to clear rights,'" she says. "That doesn't make a work an orphan work. Both internationally and domestically, you don't want this to be used as an excuse to screw individual authors." 
She is concerned that some of the proposed changes are inconsistent with international copyright agreements, such as the Berne Convention and the World Trade Organization's Trips agreement. As a result, she argues, international works should not be covered in any change of the rules. "These international norms have teeth," she says. If the United States creates an exception to copyright that does not comply with the terms of international treaties, she says, aggrieved countries can go through the WTO to impose sanctions on the United States. Concerns about international law aside, Ms. Ginsburg worries that individual artists and independent writers, not big publishers, have the most to lose from any change in copyright law because they do not have the money or influence to advertise ownership of their work. "Many of those who raise concerns about orphan works start from the premise that there are works that should be in the public domain because their authors don't care about them, and that they are clogging up the system and preventing subsequent authors and others from using them," she says. "That's not necessarily a correct premise." A group called the Illustrators' Partnership of America was formed on the basis of issues such as this. Illustrators, the group points out, are hard to trace if a picture appears uncredited in a book or online. "Visual artists are particularly harmed by this concept of declaring orphaned any work where the author can't be located or identified," says Cynthia Turner, a medical illustrator who is part of the group. "That just about covers all of our work. We are already having a lot of difficulty with our work being separated from its original publication and being thrown up on the Web and disseminated without our permission." Ms. Turner and Brad Holland, an illustrator whose work has appeared in Time and The New Yorker, argue that publishers and others will use orphan-works exceptions to exploit artists' work. 
"It would undermine our ability to control our rights and make a living from the work that we produce," Ms. Turner says. 'There Is a Market' Mr. Holland says he recently resold some illustrations that originally appeared in a book 30 years ago. "I never thought anyone would want to use them again," he says, "but it suddenly dawned on me that there is a market for these illustrations." Just because those illustrations were made long ago and were once thought unmarketable even by the artist, does not mean that he surrenders his rights to them, he says. Those illustrations would have been too old to register under Mr. Lessig's proposal, Mr. Holland notes. In general, he says, a registry requiring him to dig through and register thousands of illustrations is an unfair burden, and it is worse for photographers, who can take scores of pictures in a single year. "Lessig wants to argue that I need to register everything that I do, or it's an indication that I don't see any commercial value," Mr. Holland says. A registry would require him to dig through and register thousands of illustrations. "Peter Jaszi and Larry Lessig and these characters are all arguing that the purpose of copyright law is to bring work into the public domain as rapidly as possible," he says. The Illustrators' Partnership gathered the signatures of hundreds of artists and illustrators in the United States and other countries, including France, Ireland, and Mexico, and submitted them to the copyright office, along with a statement opposing orphan-works exceptions. Despite their opposition to mandatory registration, the illustrators are considering a voluntary registry using technology that can embed ownership information in a picture, to help people identify the owners of works. Mr. 
Jaszi sympathizes with those who worry about exploitation under new orphan-works rules, but he worries that various groups will pressure the copyright office and lawmakers to "peel off" the more challenging parts of orphan-works reform. "In other words, the narrower the coverage, the more likely it will slide through without controversy," he says. "Anyone who is interested in orphan works should watch to make sure that big chunks of material don't get pushed aside in an effort to make something happen." One of those waiting for the outcome is Leslie Humm Cormier, who teaches art history at Emerson College. Recently she was working on an article about the Modernist architect Josep Lluis Sert, of the firm Sert, Jackson, & Associates. She acquired photographs of Sert's work from the 1950s and 60s from his partner, Huson Jackson. But she learned that the firm did not own the photographs; the photographer did, and he was nowhere to be found. "I completed the work as well as I could, but I would have done much more in-depth work if I had a sense of freedom," she says. "It would have been a journal article. As it turned out, I put it in a lesser publication." Ms. Cormier, who recalls once being plagiarized by a newspaper writer, says the incident steeled her resolve never to use an item without permission. "It's severely limiting," especially for someone in her profession, she says. "It's a visual subject that I work in." TALKING ABOUT ORPHAN WORKS Public discussions on orphan works were scheduled to be held on July 26 and 27 at the U.S. Copyright Office, in Washington, and on August 2 at the University of California at Berkeley. 
More information on orphan works, including comments submitted on the issue, can be found at [64]http://www.copyright.gov/orphan The Association of Research Libraries maintains a Web site on orphan works at [65]http://www.arl.org/info/frn/copy/orphanedworks The Illustrators' Partnership of America also maintains a site that covers copyright issues at [66]http://www.illustratorspartnership.org References 55. http://chronicle.com/prm/weekly/v51/i47/47a03301.htm#talking 64. http://www.copyright.gov/orphan/ 65. http://www.arl.org/info/frn/copy/orphanedworks/ 66. http://www.illustratorspartnership.org/ From checker at panix.com Tue Jul 26 00:32:54 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:32:54 -0400 (EDT) Subject: [Paleopsych] NY MetRo: Celebrity Psychos: The Summer They All Went Mad Message-ID: Celebrity Psychos: The Summer They All Went Mad http://www.printthis.clickability.com/pt/cpt?action=cpt&title=Celebrity+Psychos+-+The+Summer+They+All+Went+Mad&expire=&urlID=14873120&fb=Y&url=http%3A%2F%2Fwww.newyorkmag.com%2Fnymetro%2Fnews%2Fculture%2Ffeatures%2F12264%2F&partnerID=73272 [Click on the URL to get lots of photos of these celebrities] Celebrity and Its Discontents: A Diagnosis By [8]Vanessa Grigoriadis Our celebrities are mad as hell, and they're not going to take it anymore. They're on a dangerous rampage, and no one is safe. Christian Slater grabbed a woman's bottom outside an Upper East Side deli, and Russell Crowe had a tantrum lengthy enough for him to rip a phone out of the wall, take it down the elevator, and throw it into the face of a clerk at the city's most exclusive hotel. Dave Chappelle beat a quick path from his final Comedy Central tapings to South Africa, explaining he needed to go visit a friend, and Brad Pitt dyed his hair platinum (he got it done by Jen's hairdresser!), played public footsie with Angelina Jolie, and paid the price with viral meningitis. 
Courtney Love, the patron saint of celebrity craziness, has been quiet lately, but getting larger. Who knows when she may erupt again. This summer, all outbreaks are only sideshows to the concurrent breakdown of Michael Jackson during his trial and the more recent mania of Tom Cruise, two of the biggest and most mysterious stars in the world unmasked as stark-raving lunatics. (The reeducation of Katie Holmes, the Manchurian Fiancée, continues apace.) Attack or be attacked: The other week, Leonardo DiCaprio was hanging out at a house party in the Hollywood Hills when one of the female guests hit him in the face with a bottle. This is a country of big, of mega, and these are megastars having megabreakdowns, and we are megainterested. Something is wrong with Chris Tucker too--caught speeding at 109 miles per hour, he recently led cops on a ten-mile chase, later explaining he was late for church. It seems that celebrity egos have gotten out of control. It seems that the celebrity system has gotten out of control. The $20 million against 20 percent of the gross, the sponsorship money, the lava of free stuff. The freedom, the immortality, the fact that you will never be found guilty in a jury trial. Mariah Carey becomes a star at 18, and she never has to think about the weather for her entire adult life. It seems there are so many more images of celebrities these days that there cannot help but be more out-of-control images, the curtain blasted to bits by the surveillance hive-mind that extends from paparazzi to stylist's assistant tipped out by Us Weekly to neighboring Delano cabana guests. Then it seems the craziness might be happening because the increase in watching is the very thing creating the craziness.
Then it seems that the beginning, middle, and end of the celebrity life story is finding a way to get people to keep watching and loving the star forever, so at a time when they are more watched and more loved than at any other point in history, they should not be going so crazy. But they are. The celebrity houses stretch along the Malibu shore, one after another, like a string of diamonds. From these three- (Courteney Cox Arquette) or four- (Julia Roberts) or five- (Ray Romano) bedroom modernist boxes by Richard Meier's contemporaries, you can see the world clearly. From these soaring windows, the water is fine. They are built close to each other as in a city on their moss-covered stilts. Ten million dollars does not even secure a backyard, but the Pacific induces a state of Zen, and you even get a frisson of excitement that only the barest sliver of land separates you, George Clooney, from Halle Berry, or Mel Gibson from Britney and Kevin. Inside the glass bubble, you feel all alone. The only comfort is the wide-open ocean. But here, in the middle of the Pacific, is Frank Griffin, 55, British co-owner of an L.A.-based photo agency, separated father of an 8-year-old son, and not one of the worst kinds of stalkerazzi but not one of the best either. He stands on the bow of the Full Moon, the new 41-foot Cranchi boat he purchased with his spoils from the first Tom and Katie shot in Rome, the first Jennifer Lopez and Marc Anthony shot, one of the first Britney Spears bumps that later turned out to be not a bump, or, in a gruesome side effect of newfangled tabloid reportage, the first photo of a bump that might have been a bump that didn't last. Britney hates the paparazzi, especially now that she's huge, like huge huge: "On a Britney car chase," says a shooter for another agency, Splash, "you're thirteenth in a line of cars following this racing madwoman: There's nothing to do but close your eyes and hang on. It's so dangerous! It's my favorite part of the job." 
Today, Griffin is chill; he's mostly out here showing off his new boat to two of his young charges, Danny Young and Mustafa Khalili, 28-year-old Brits whose nationality is clear despite their American uniform of khaki shorts, baseball caps, and slim Pumas in primary colors. Khalili returned from the beaches of Waikiki yesterday on the trail of Justin Timberlake and Cameron Diaz, two big stars who may hate the paparazzi even more than Britney, but he came back empty-handed, no cove giving up secrets, no hotel valet with a price on information, and later Griffin is going to give him hell. First, though, they're going to check out Brad Pitt's house. The sea gets rougher as the boat turns upwind, houses streaming by faster now, the one with the dark wood (Stephen Dorff's) and the apricot one with green trimming (Leo's mom's) and finally the one Jennifer Aniston leases for upwards of $25,000 a month, which has all the blinds closed--no photo today of the bikini-clad America's Sweetheart reading a script on a chaise longue, that ubiquitous tabloid shot that tends to be followed by a caption about how she's recovering from a Shiatsu massage (Aniston, all tabloid readers know, gets massages daily). Griffin knows Aniston--his man in Chicago got her walking on Lake Michigan yesterday with her hairstylist, Chris McMillan. "He's trying to get Brad and Jen back together, but it's not going to work," declares Griffin, who has a lot of strong opinions on such topics. "When it came to Angelina, Jen couldn't forgive that; perhaps if they'd had children, he would've been more discreet. It was Angelina's choice to out the relationship, though, with the photographs from Kenya." He bangs his hand on the thin tan wheel. "Brad flew to Mombasa on a private plane! The information came from her camp!" A few whitecaps swirl around a buoy commandeered by happy seals. "That looks fun," says Young. "Until a shark comes along, and then--" He brings his hands together in a loud clap. 
The boat pulls near Brad's. Built into a cliff, the house has a long series of windows shaped like an eye, staring right at us. Griffin stares back and raises his binoculars. "Come on, Brad," he implores. "Give it up." At the most basic level, it's people like Griffin, with an army of furtive men with digital cameras, who are driving celebrities crazy. They are the snakes in the celebrity garden, lurking and leering, spoiling paradise. Or maybe they're more like Jagerettes, handing out shots and getting everyone drunk on the celebrity-industrial complex, a shape-shifting behemoth that compensates for fewer ticket sales by producing more personality-driven lip glosses. The tabloid business is growing as the entertainment business is shrinking; perhaps eventually the former will overtake the latter, and stars will still be playing themselves. The relationship between stars and paparazzi has certainly turned into bounty hunting, but it's not entirely clear that physical safety is the only reason stars have lobbied for the LAPD to begin an investigation into the paparazzi, given symbolic heft by the recent car accident between Lindsay Lohan and a "pap" on a trendy Nolita-esque corner of West Hollywood. Celebrities don't want to ignore the paparazzi anymore--the stories they fuel have gotten so big they're ending up on the CNN ticker. So life takes place behind half-drawn blinds. They should have known better when they moved in, or perhaps they've only just started to mind that Malibu, with a Nobu in the quaint mini-mall, has in the past few years become Star Country, and thus a leading spot for paparazzi, stalkers, starfuckers, fans, and all manner of untoward elements who seek to suck the energy right out of the star and leave no excess warmth of heart for him to bestow on the charity of his choosing. And Griffin is taking more than their pictures. 
Gossip, particularly of the unsourced British variety, is the leader in celebrity irritants, as discerned in a study of celebrity stressors by Charles Figley, a professor at Florida State University. The gossip keeps pouring in as we simultaneously honor and revile our celebrities in a more intimate manner than ever before; today is only another day in the inexorable progress of a full Britification of our celebrity press. What's important now is less the dissonance between actor and onscreen roles and more the difference between the image the celebrity is selling and the way he "really" is. Most of the paparazzi you come across in L.A. are Brits, relentless greyhounds of war with an attitude. "Americans can't do this job--they don't want to make $2,000 a day legally," sneers Griffin. The camera doesn't lie, you were in this place at that time, Jennifer Garner is clearly many months pregnant and having a shotgun wedding to Ben Affleck at Parrot Cay, but then there's the backstory too. (Are you really happy? Do you hate that Ben smokes? Are you secretly terrified of J.Lo?) Paparazzi, more than ever, are the sources on text accompanying photographs. Exaggeration is what tabloids traffic in, and photographers can be happy to oblige--they often submit text to editors along with their photos, text that can be phoned in from a place called Imagination. If not paparazzi, there's always someone else to sell you out. Even your own publicist. "Some publicists are part of the problem," says Ken Sunshine of Sunshine Consultants. "To get attention for their unknowns, people sell out their A-list clients, who are too dumb and too naïve to realize this is being done to them. The income stream is a volume business." Lindsay Lohan and Paris Hilton have both claimed to have excommunicated friends whom they set up with false information that later appeared in print. (No word if each was the other's friend.) 
Finally, as any reader of the supermarket tabloids can tell you, truth and falsehood are beside the point. There's a phantom being, a doppelgänger, out there with your name on it, and you can't control the way it's behaving. Once you have been cast in a story line, there is no way out. It's been a year since the tabs first wondered if Nick Lachey and Jessica Simpson were going to break up, and the cover of last week's Star magazine still asks, nick & jessica: over by xmas? It is now necessary for Jennifer Aniston to bounce back from her divorce and engage in a deep friendship with Vince Vaughn, her co-star in a new movie called, of course, The Breakup. Inevitably, in the next news cycle, Jen collapses on the way to dinner with friends because the divorce process is "taking an unmistakable toll." In the final indignity, the same photo of Aniston walking her dog may be used one week to show her independence--she's enjoying things for herself!--and the next week to demonstrate her unhappiness ("She tried to relax by taking her dog, Norman, on a long walk"). Of course, this is only the stuff that gets printed. Any bit of information posted on a blog flies across the Internet and sticks. The gaze is intense and permanent. "My clients are concerned about speed," says Leslee Dart of the Dart Group. "You print a false rumor, and within an hour, it's disseminated worldwide. The ability to set the record straight has become impossible." The expanding world market needs to be fed: When I was with Griffin, he got a call from his distributor about a new account in Croatia. "There you go," he crowed. "A few years ago, they're slaughtering each other, and now they're buying pictures of Britney Spears's crotch." These days, we talk about celebrities like they're our friends--or former friends. On a recent night at Koi, the trendiest sushi restaurant in all L.A., Kato Kaelin, older, ruddy-cheeked, in a fringed leather jacket, is the only celebrity inside. 
A middle-aged woman in a yellow pantsuit comes out of the restaurant and takes a picture of the paparazzi with her cameraphone. A tall couple in slightly too dressy evening attire slither toward the valet. "Who did we miss in there?" they ask each other. "Angelina and Brad," jokes the woman. "He's got her on the table. He's like, `I love sushi!' " "Sa-shimi!" says the man. It's so hard to be a star--and no one cares. Stars are not just like us. According to researchers, celebrities are four times as likely to commit suicide as noncelebrities and live, on average, thirteen years less than Joe and Jane Sixpack. Celebrities may receive substandard treatment at hospitals, victims of deferred medical tests or competition between surgeons for the honor of operating on a celebrity. Celebrities may experience more insomnia, migraines, and irritable-bowel syndrome. Celebrities are twice as likely to develop a serious alcohol problem. And who's to blame for this tale of famous woe? Well, Mommie Dearest, of course. "In every autobiography of a famous person, you find that a parent has either died, been punishing, or been terribly neglecting," says Sue Erikson Bloland, a psychoanalyst in private practice and daughter of ego psychologist Erik Erikson, whose childhood followed a similar pattern. This void is then filled by a mentoring figure, a grandparent or teacher or even the other parent, who makes a narcissistic investment in the star. The child grabs the chance at love, but it's a trap. Jessica Simpson's lifetime of encouragement from her father, the one who pushed her to sing and also made her promise to remain a virgin (his virgin) until she married, is all about reducing her to his puppet (a pretty puppet). Not content to leave the study of celebrities to tabloid body-language experts, the psychological community is coming to terms with celebrity psychopathology. 
The modern medical term--the famous term, the celebrity term, the superstar of psychological monikers--is acquired situational narcissism (coined by a doctor who may know whereof he speaks, since he refused an interview because he didn't appear in the "Best Doctors" issue of this magazine). Are the crazy drawn to Fame, or does Fame make them crazy? ASN claims the latter. To a celebrity, narcissism is a rational response to a world that functions as a mirror, amplifying one's positive self-image, the sense that one is in the absolute center. It arrives later than classical narcissism--which sets in between the ages of 3 and 5, once a realistic view of the world begins to develop--but the disorders are indistinguishable, with patients exhibiting the same grandiose fantasies, excessive need for approval, lack of empathy, anger, and depression (how fabulous). Fearful of exposing the real them, narcissists project a glorified self that becomes so ingrained it becomes impossible to tell what's real and what's made up. This is the self they start talking about in the third person. Everyone must love this self or it risks dissolution. There must be Omnipresent Love. Speech becomes impressionistic and lacking in detail--a symptom celebrity profilers well recognize. Celebrity, as John Updike wrote, is the mask that eats into the face. A study has shown that pop stars use personal pronouns in their songwriting three times more once they become famous; another study claims that the more famous one gets, the more one checks oneself in the mirror, and the more one's self-concept becomes self-conscious. It's a problem, to be both self-involved and self-conscious. A Tinseltown version of post-traumatic stress disorder develops. Danger is around every corner. "The same thing happens to celebrities that happens because of war, because you're in the middle of disaster, terrorism," says psychologist Robert Butterworth. 
Last month, Catherine Zeta-Jones's stalker was sent to prison after claiming she was going to blow Zeta-Jones's brains out like JFK or slice her up like Manson did to Sharon Tate unless she stopped having an affair with George Clooney, which she wasn't. Trapped in their bubble, celebrities experience arrested development. The celebrity becomes an adolescent, a developmental stage that is non-age-specific. The time is the time before the blows to self-esteem that lead to a mature, realistic view of one's weaknesses and strengths and a capacity for love that transcends self-love (Paris Hilton time). But once again, the world impedes. Someone, a fired masseuse or peevish younger sister, tells the celebrity that he is full of it, or he loses out on the new Steven Soderbergh movie. Impostor syndrome sets in, with its attendant sense of fraudulence. The star begins to notice he has a limited skill set based upon a fortunate genetic hand dealt him. Emotionally intuitive creatures, they realize they're surrounded by people smarter than they are--even their agents!--and that makes them insecure. Wary of the gap between the false and true self, the star overcompensates by developing a God complex. Important people request the star's largesse, as the many supplicating letters in Marlon Brando's recent estate auction demonstrate, even one from Martin Luther King Jr. ("I have been subject to great personal strife and am obliged to go to Court Thursday," Brando telegrams back. "I feel honored that you asked for what assistance I could give. I cannot at this time be of assistance.") The star may be told, like Madonna has been by the rabbis of Kabbalah, that she is the reincarnation of Queen Esther. The star may be the tool by which the message of a body like Scientology is meant to be disseminated across all lands. The overall multiaxial assessment: Completely Out of Their Mind Personality Disorder With Multiple Insane Features, or, more succinctly, Beyond Diagnosis. 
So who would want to be a star under these conditions? Listen to a star in the making: Ariel Gade, 8, at the premiere of mainstream horror flick Dark Water, when asked if she likes fame. "I love it," she says, her voice quavering with excitement. "I'm just having such a good time tonight!" Does she want to be famous? "I'd like to be a director. I think directors are the coolest people around." When I ask her if things were still the same with her friends, first she says yes, but then reconsiders: "Well," she says, scrunching up her exquisite face, "actually, I'm home-schooled, so I don't have any friends. But I do have cousins." She starts to walk away but stops short. "Oh, and by the way, this is a Bill Blass design," she says, holding out her pink tulle dress. "Bill Blass brought it over a few days ago, I don't remember exactly when. Bill Blass gave it to me as a little gift." (Which would have been nice, except Bill Blass is dead.) Paradise is hanging out at the most private--but not too private--places around, like the exquisite Château Marmont garden, which mortals are discouraged from entering after nightfall, or Bungalow 8, the subway-car-size Chelsea bar with no VIP room that makes stars feel "normal" because each banquette features stars like a Mary-Kate Olsen or a Jay-Z, so that everywhere you look there is a reminder that you are in the right place, you have not made a mistake, you are as special as they say. Homage will be paid from celebrity to celebrity: "I went up to Angelina Jolie at an awards thing, and I just, I couldn't help it, I started bawling," says Anne Hathaway, star of The Princess Diaries, at lunch at the Central Park Boathouse on a recent Wednesday. "She's been my favorite actress since I was 16. We watched each other in the eyes, and I could tell she had a beautiful soul. I guess she thought the same thing about me, because she asked me to go to Cambodia in association with her project. 
She said the sweetest thing: `Whenever I'm in a hotel room, I love watching your films, because even if it's three in the morning, it makes me so happy.' " No one has ever been safe in the House of Fame, though. Leo Braudy's definitive study of fame, The Frenzy of Renown, traces the earliest mention of this house to Ovid's Metamorphoses, where it rests on a mountaintop at the meeting point of land, sky, and sea. In Chaucer's fourteenth-century poem "The House of Fame," the house has become a castle with as many windows as snowflakes, packed with sorceresses and jugglers, magicians and wizards, celebrated singers like Orpheus and humble minstrels with bagpipes. A half-foot of solid gold covers the ceiling, walls, and floor of the great hall, where Fame herself presides from a throne made of ruby, her head extending to heaven and her body covered with as many "tongues as on bestes heres." Her herald, Eolus, the god of wind, holds a trumpet of Praise and a trumpet of Slander, blowing from them as Fame pleases. Tom Cruise, in all his lunatic effusiveness and paranoid defensiveness, is the definitive celebrity of this age. He's the boy in the bubble. He's said not to read his press, and has requested photo approval on shoots since his Risky Business days. One could not act as Cruise has if one understood how one's actions were being interpreted. One could not pop the question to Katie Holmes at a candlelit dinner at the Eiffel Tower and announce the news at a press conference less than eight hours later, nor claim that methadone was originally called adolophine because "it was named after Adolf Hitler," nor tell Matt Lauer, "There is no such thing as a chemical imbalance . . . Matt, Matt, Matt, Matt--you're glib." (A talk-show host, glib?) One could not be so forceful about such things unless one was Tom Cruise. "The exterior is only one covering," he has said, equally forcefully. "I do not have a fear of life or death." 
In the bubble, the Cruise makes his own rules, as was evident at the New York War of the Worlds premiere last month. Even though Hollywood protocol dictates Major Star arrival only once all other beings have been stuffed in the theater, Cruise arrived two hours early. He wanted to press flesh, fans, reporters, curious bystanders, but particularly his new fiancée, whom he devoured with kisses. CAN I STEAL A KISS FROM TOM? read a placard held up by a fan. Katie shook her head. Katie does not speak. The hundred or so fans who got there early wore War of the Worlds T-shirts, and Cruise ran over to them, grabbing cell phones to say hello to mothers before he headed to the press line, where frantic arms stretched tape recorders over barricades. "We're from British TV," said one reporter. "I love Brits!" shrieked Cruise. "We're from Australian TV," said the next reporter. "I love the Aussies!" he yelled. The reporter from People magazine was shaking: "I have no idea what's on this tape," she whispered. "It was like we went into a trance and got all giggly and girly. Tom touched my arm--he gripped it." Other guests started to arrive, like Hulk Hogan: "I think AFTRA should elect me as the commissioner of Demolition Paparazzi with a kind of above-the-law license, and let me handle each of them on an individual basis," he said, twitching. Steven Spielberg strode in--this is his movie, Tom is his guy, and no one's messing with either of them. "The media has to make a lot of money the way that movies have to make a lot of money," he said. "I'm very grown-up about this. They need to get out of a media slump the same way everyone's like, `What's going to get Hollywood out of their movie slump in '04 and '05?' So when I see the media exploiting a couple, I know that's another industry trying to make a lot of money off of the celebrity of these people. 
Then they get weeks of a good episodic series called the Tom and Katie series, the Ben and Jennifer series, the Brad and Angelina series." He glowered. "The media and the movie industry don't always agree with each other, but they're both out to entertain," he said. "People should not be fooled." Howard Stern and girlfriend Beth Ostrovsky greeted Tom and Katie, then sashayed down the red carpet. "I can't believe the girl is 26 years old and still a virgin, but I do believe her," said Stern. "On my show, I'd ask, `What does that mean, "to hold out"? Everything but? What exactly?' " "Honey!" said Ostrovsky. "She was very nice to you about two minutes ago." "Am I being mean?" asked Stern. "I'm just curious. What if they get in bed and--who knows?--he doesn't like her backside. There could be all kinds of problems. Then there's the whole religion thing. Oh, I don't know where to end. There's all kinds of weird stuff going on there, jumping up and down on the couch on Oprah. I'm excited when I'm with a woman, but I don't jump up and down on a couch--" The Cruise did not hear any of this. He glided right past it. He was involved, steady, focused, making his way toward the theater as he took on questions about when he will get married, or if he feels competitive with Holmes--"I don't have rivalries," he said, "especially not with my love"--and how it feels to have Katie near him ("It's very exciting"). Will he do Broadway? "If I can find the right thing," he said. "I don't know of any piece of theater I'd like to be doing, but I like dancing. I like dancing." Now Cruise was at the door. He turned around one more time, looking back over all he saw, all these hundreds of people swarming toward him in midtown Manhattan, the whole world watching, everyone interested expressly in the Cruise. From inside the bubble, he waved, like the good witch in The Wizard of Oz. Of course, there's another Tom Cruise--a couple of them, actually. 
The good Tom Cruise has some questionable twins. There's the one who goes home and does God knows what with God knows whom--the real Tom Cruise. Then there's the one who haunts certain blogs and numerous conversations. Who among us would believe any of this preposterous stuff, but there it is, wherever you look on the Internet. Did Tom Cruise ask Scarlett Johansson, Jessica Alba, and Kate Bosworth to be his girlfriend before picking Holmes, who was in fact his fifth-choice girlfriend? Ridiculous. Did he promise Holmes a five-year contract, worth $10 million with no conjugal duties, to play his wife? Who makes this stuff up? Rob Thomas of Matchbox 20 has even gone on record denying that Cruise was caught in Thomas's bed by Thomas's wife. ("If I was gay, Tom Cruise wouldn't be on the top of my list," he said. "It would be Brad Pitt.") Word-of-mouth stories are even less believable, more like Eyes Wide Shut than anything that would happen to a megastar in the prime of his career. I bumped into a friend in the West Village last week who told me the most outlandish story of all: One time on Universal president Ron Meyer's boat, Cruise put a mask on, the same mask from Mission: Impossible, and wouldn't take it off. They docked and went to a nightclub. Cruise went to the bathroom. He met a guy. The guy wasn't interested. He ripped off the mask and declared, "But I'm Tom Cruise!" The only response to this kind of lunacy is "And I'm Marie Antoinette!" Cruise is a figure of fantasy, stalking our dream lives, as surely as the paparazzi stalk him. Except that many of us don't believe it's our dream life. Everybody thinks that they know what celebrities really do. They do it with gerbils, and with women not their wives, and under the influence of cocaine, and in bathrooms with people of the same sex. 
(Part of what's so satisfying about Paris Hilton is that, before she turned into Ivana Trump, every single atom of her being told you her real life was every bit as lurid as any figment of the gossip imagination.) People tell you things, and they have such a ring of truth to them: "I worked with a male movie star who, when he became a male movie star of stature, would actively work the casting couch--not only proclaim the size of his penis, which was gargantuan, but willingly say servicing it was part of the program," says a former agent. "Some women would run screaming from the room. Some would stay and become part of the movie. And I was his agent. I was his agent." When one makes about $80 million a picture, like Cruise does, one can pay for whatever handlers one wishes, and these handlers will become your friends, family, and confidants. (Just make sure they don't have cousins at In Touch!) L.A.'s population is exploding, and I'm not sure that it's not because people today are compelled to relocate to places where they could possibly work for, with, or near a celebrity. These days, a life as Julia Roberts's assistant is not a lost life, but a life blessed, transmogrified, made shiny by her presence. To be in the entourage of such a star, either as landscaper, organic-food preparer, or second assistant, is to be made whole. The people who help make stars beautiful are the ones they're closest to--they see the Real You before the fake one. Jennifer Aniston moved in with her hairdresser when she and Brad split up. Therapists are great, but they're hard to own--"You don't have time to treat more than one celebrity at once, unless it's Woody Allen," says psychologist Stuart Fischoff. "They say, `I want to make sure, Doc, that I can call you 24 hours a day, seven days a week.' Well, no, you can't. `No one sets limits on me!' " The aura of a celebrity extends over everyone he or she works with. 
"I've gotten thank-yous on albums, and that's really great," says Stuart Kaplan, star cosmetic dermatologist, multiple triple-platinum albums with plaques inscribed TO OUR DERMATOLOGIST hung throughout his Beverly Hills office. There he is, still at the office at 9:30 P.M., a lovable guy in blue Dickies, a Horace Mann graduate who misses New York but can't give up the swell life. "I treated a kid whose father was a director, and he said, `Somehow, somewhere, you'll have a character named after you in a movie,' " he says. Then he catches himself. "I am not a better doctor because I treat celebrities," he says. "I am a better doctor because of my charitable work." Spoken like a celebrity. Nowadays in the celebrity nuthouse, the inmates are running the asylum, only pretending that they're the ones under observation. Brad Pitt owns the international rights to the lusty 60-page W magazine spread that cast Angelina Jolie as his wife. Michael Douglas and Catherine Zeta-Jones sold their wedding pictures for ?1 million to OK!, the smarmy British tabloid that will open a U.S. office this fall and very likely broker more of such deals to the detriment of shallower-pocketed American tabloids. Gwyneth Paltrow and Chris Martin staged a paparazzi shot leaving her gynecologist after getting the news that she was pregnant, her brother's girlfriend behind the camera. Of course, for a narcissist, privacy is a relative concept. Often, it's just part of the performance. Private, when a celebrity uses the word, means many things, perhaps "I'm classy" or "I don't go to nightclubs" or "I'm shy," but what it rarely means is "I'm private" and certainly not when a semi-naked photo shoot is involved. A few months ago, good-girl actress Hilary Duff, 17, explained to me in an interview that she couldn't possibly divulge that she was dating rock singer Joel Madden--she was a private person, she said, and she had to save something for herself, otherwise what does one have? This made sense. 
Except a couple months later at the premiere of The Perfect Man, Duff's new movie, there was Madden, covered in tattoos, his hair arrayed in a black-dyed faux-hawk--Hilary's "Perfect Man," as the entertainment-news programs put it. He mumbled something about Hilary being a great girl. Public image, after all, is the business stars are engaged in. Nowadays, reality seems to be following fantasy, as stars become their tabloid selves. Angelina Jolie, best known for Tomb Raider, is now an A-list star. The affair with Brad Pitt has been a small price to pay. Manipulations of the machine can have real-life consequence. The May-December between Ashton Kutcher and Demi Moore, which began when both had projects to promote, has now produced a "bump." No one is being fooled, and no one is in control. The circus has no ringmaster. Yet everyone is getting some of what he wants. And isn't that what psychiatrists say a relationship is all about?

Conditions

[DCM* 100.1] Exhibitionism
Celebrities with an exhibitionism disorder tend to favor halter tops, tattoos, even bare feet. Often (e.g., Paris Hilton, Rob Lowe) seen more memorably in amateur films than in professional ones. Example: Britney Spears

[DCM 100.2] Dissociative Behavior
Celebrities with dissociative-behavior disorder tend to behave in outlandish ways with no knowledge that others perceive their acts as out of the ordinary. Most often seen in megastars with extensive entourages. Example: Tom Cruise

[DCM 100.3] Inappropriate Romantic Partners
Celebrities who take inappropriate romantic partners only occasionally develop a sense of guilt and remorse, except (e.g., Hugh Grant) when a mug shot is involved. Examples: Brad Pitt and Angelina Jolie, Jennifer Aniston and Vince Vaughn

[DCM 100.4] Body Dysmorphism
Body dysmorphism often develops in celebrities due to excessive media attention to secondary sexual characteristics (breasts, buttocks). 
The subsequent weight loss can be accompanied by guilt when the media focus on the celebrity's extreme thinness, and on the bad example being set for the nation's children. Examples: Nicole Richie, Lindsay Lohan, Mary-Kate Olsen

* Diagnostic Celebrity Manual

Root Causes

Paparazzi Run-ins
In recent years, as fees for photographs have escalated, paparazzi have become more aggressive and predatory. Stars are beginning to fight back. Example: Cameron Diaz

Limited Social Circles
Celebrities most often associate with other celebrities (indeed, this is one of the classic indications of narcissism), which exerts a distorting influence on their worldview and creates enormous competitive anxiety. Examples: Brad Pitt, George Clooney, Matt Damon

Asocial Freedom
The absence of adult responsibilities or any normative pressures often leads to dissociative behavior (e.g., couch-jumping), outlandish costumes (e.g., pajamas), and plain freakishness. Example: Michael Jackson

Tabloid Incongruity
The tabloids present a view of the celebrity world that is authoritative, though often not factually accurate or even internally consistent. Examples: Us Weekly, People

[Celebrity Therapy] Gwyneth and Baby
As the celebrity system has evolved, celebrities more and more are learning to control it. This mock paparazzi shot, of Gwyneth Paltrow and Chris Martin happily leaving her gynecologist's office, was actually taken by her brother's girlfriend and sold to the tabs.

Additional reporting by Jada Yuan.

From checker at panix.com Tue Jul 26 00:33:00 2005 From: checker at panix.com (Premise Checker) Date: Mon, 25 Jul 2005 20:33:00 -0400 (EDT) Subject: [Paleopsych] Need Help Sorting Through All the Articles I Send! Message-ID: I suffer extreme time compression from trying to read all the articles I send and would appreciate your highlighting exceptionally good ones for me. And when you comment upon them, be very specific if you want a response. 
I greatly enjoy providing this service, but time is a huge, huge problem. You might also tell me generally what your preferences are. I am avoiding most politics, since it rests on making crises out of non-crises by setting them in the terms of our nation vs. your nation. Mr. Mencken noted this a very long time ago: "The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary" (H.L. Mencken, _In Defence of Women_). And it gives a field day to rent-seekers. I shall leave it to others to analyze the particulars of particular cases.

From shovland at mindspring.com Tue Jul 26 06:21:06 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Tue, 26 Jul 2005 08:21:06 +0200 (GMT+02:00) Subject: [Paleopsych] new song Message-ID: <10704899.1122358866383.JavaMail.root@wamui-blood.atl.sa.earthlink.net>

good stuff michael

-----Original Message----- From: Michael Christopher Sent: Jul 25, 2005 8:55 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] new song

I wrote my first song lyrics in a long time, it's a love letter to everyone involved in the "culture wars":

Reprise

Encapsulated in our cars
immunized from the stars
we were so united at the start
how did we drift so far apart
a dubious distinction to have it all undone
by our insecure insistence on being number one

turn around and face the morning sun
listen to the messengers of dawn
put away your armor and your scripted litany
learn at last to love your enemy

the apple of your eye is poison to your soul
a good man in a bad crowd is a bird in a black hole
the venom that we swallowed is the virus of control
how could subdivision make us whole?

we hide behind these escalating masks of enmity
why is it so painful to agree? 
we only see our neighbor down the barrel of a gun we'll all go into the abyss as one time plays a game with our hearts an endless war of light and gravity divide the tribes and polarize the family and learn to stand the holy ground of unity no one left to criticize it's only us we demonize turn around and face the morning sun listen to the messengers of dawn put away your armor and your scripted litany learn at last to love your enemy why are we so taken by the proud appealing to the madness of the crowd applauding when they show their snow white fleece and crucify the ones who stand for peace who will choose the ones who have no choice who will stand for those who have no voice why are we still on our knees, paralyzed with fear the ones that we are waiting for are here turn around and face the morning sun listen to the messengers of dawn put away your armor and your scripted litany learn at last to love your enemy Michael Michael C. Lockhart http://www.soulaquarium.net Blog: http://shallowreflections.blogspot.com/ Yahoo Messenger:anonymous_animus "The most dangerous things in the world are immense accumulations of human beings who are manipulated by only a few heads." - Carl Jung "We are stardust, we are golden, We are billion year old carbon, And we've got to get ourselves back to the garden." Joni Mitchell ____________________________________________________ Start your day with Yahoo! - make it your home page http://www.yahoo.com/r/hs _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Tue Jul 26 19:38:30 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:38:30 -0400 (EDT) Subject: [Paleopsych] NYT: To Reduce the Cost of Teenage Temptation, Why Not Just Raise the Price of Sin? Message-ID: To Reduce the Cost of Teenage Temptation, Why Not Just Raise the Price of Sin? 
http://www.nytimes.com/2005/07/25/business/25consuming.html By DAVID LEONHARDT WHEN you look back on all the attempts to curb teenage drinking, smoking and drug use over the last couple of decades, you start to ask yourself a question that countless parents have asked: Does anybody really know how to change a teenager's behavior? Sometimes the government and advocacy groups have used straight talk, like Nancy Reagan's "Just Say No" campaign. Other times they have tried to play it cool. They drop an egg into a sizzling frying pan and announce, "This is your brain on drugs," or they print mock advertisements that pretend to market cancer. It all feels like a delicate exercise in adolescent psychology. Much of this back and forth is unnecessary. There is in fact a surefire way to get teenagers to consume less beer, tobacco and drugs, according to one study after another: raise the cost, in terms of either dollars or potential punishment. In just about every state that increased beer taxes in recent years, teenage drinking soon dropped. The same happened in the early 1990's when Arizona, Maryland, New Jersey and a handful of other states passed zero-tolerance laws, which suspend the licenses of under-21 drivers who have any trace of alcohol in their blood. In states that waited until the late 90's to adopt zero tolerance, like Colorado, Indiana and South Carolina, the decline generally did not happen until after the law was in place. Teenagers, it turns out, are highly rational creatures in some ways. Budweisers and Marlboros are discretionary items, and their customers treat them as such. Gasoline consumption, by contrast, changes only marginally when the price of a gallon does. "When people think about drugs, alcohol, even cigarettes, they think about addiction and this strong desire to consume them. They don't think price has an effect," said Sara Markowitz, an economist at Rutgers University in Newark, who studies public health. "That's just wrong. 
And it holds among kids even more so than among adults." Not only that, but unprotected sex tended to become less common after the changes in the law, according to studies. Gonorrhea and H.I.V. rates dropped. So did drunken-driving deaths and, for boys, suicides. Whatever the policies' downsides - and they are not insignificant - they have some of the clearest benefits of any government action. They are also a useful reminder of how often the power of incentives is underestimated. Taste, style, trendiness and advertising all do affect human behavior. A study in the Archives of Pediatric and Adolescent Medicine this month, for example, found that antitobacco television ads do seem to reduce smoking. But nothing has quite the sway that an economic carrot or stick does. When a big superstore moves into town, many shoppers who claim to prefer the coziness of mom-and-pop stores trek out to the megamall for the lower prices. (You know who you are.) When the government cut welfare payments in the 1990's, many people who had been receiving them went back to work. Even when inscrutable teenagers and addictive substances are involved, the basic dynamic does not change. Tax increases on alcohol and tobacco have been fairly common in recent years, allowing researchers to look for the crucial before-and-after effect that helps separate correlation from causation. Alaska, Nebraska, Nevada, Tennessee and Utah have all increased alcohol taxes since 2002. Georgia, Kentucky, Tennessee and Virginia - tobacco-growing states all - are among those that have raised cigarette taxes. Just because states with higher taxes have lower teenage drinking and smoking rates does not mean that one caused the other. An outside force - like a highly educated population, which might tend to eschew beer and cigarettes but vote for higher taxes - could instead be the underlying cause. But if drinking or smoking always seems to fall after a tax increase, then the case becomes far stronger. 
Looking across the states and taking into account all the other factors that can be measured, researchers have found that a 1 percent increase in the price of beer leads to a drop in teenage consumption of between 1 and 4 percent, Dr. Markowitz said. For cigarettes, a 1 percent price increase causes roughly a 1 percent decline in smoking. Using the same method, researchers can also answer a question that has long occupied public health specialists. It is generally accepted that youngsters who drink, smoke and use drugs are also more likely to take dangerous risks, like having unprotected sex. But does one lead to the other? Or as Christopher Carpenter, an economist at the University of California, Irvine, puts it, are there simply "bad kids" given to misbehaving in all sorts of ways? Depending on your definition of misbehavior, the answer is both. When alcohol taxes rose, the number of teenagers who reported having had sex in recent months did not change, according to a study by Michael Grossman of the City University of New York and Dr. Markowitz. Nor did the number of partners they had. But fewer teenagers had unprotected sex. The number of new gonorrhea cases and - though the evidence on this was weaker - new H.I.V. cases also dropped. Since teenagers get these diseases at far higher rates than the rest of the population, any decline can be a big deal. The enactment of zero-tolerance driving laws also appeared to lead to a fall in sexually transmitted diseases. For boys between the ages of 15 and 20, suicide rates fell 7 percent to 10 percent after a law was put in place, Dr. Carpenter found. (The fact that the effects seem to be concentrated among boys and whites is a mystery that awaits future research.) "When zero-tolerance laws were being debated, it wasn't like, 'Let's reduce drunk-driving deaths - and gonorrhea and suicide,' " he said. "This is an unintended, surprising consequence." 
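As a back-of-the-envelope illustration of those elasticity estimates (a toy constant-elasticity model of my own, not the researchers' specification; the function name is hypothetical), the compounding works out as follows:

```python
def consumption_after_price_rise(base, pct_increase, elasticity):
    """Constant-elasticity sketch: each 1% price rise cuts consumption
    by `elasticity` percent, compounded over the full increase.
    (Illustrative helper; the article reports only the 1-to-4-percent range.)"""
    return base * (1 - elasticity / 100.0) ** pct_increase

# A 10% beer-price increase at the midpoint elasticity of 2:
# consumption falls from 100 units to about 81.7.
result = consumption_after_price_rise(100.0, 10, 2.0)
print(round(result, 1))
```

At the high end of the cited range (an elasticity of 4), the same 10 percent price rise would cut consumption to about 66.5 units, which is why even modest tax increases can move teenage behavior noticeably.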
Zero-tolerance laws also have the advantage of being aimed specifically at teenagers. New alcohol taxes, on the other hand, take money from millions of people who do not spread venereal diseases or drive drunk. But zero tolerance is now the law of the land in all 50 states, Dr. Carpenter said. There is no more public health uptick to get from it. So until somebody comes up with a smart new incentive, another "Don't Drink and Drive" campaign might be the best tool out there. From checker at panix.com Tue Jul 26 19:38:43 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:38:43 -0400 (EDT) Subject: [Paleopsych] Boston Globe: Why do suicide bombers do it? Message-ID: Why do suicide bombers do it? http://www.boston.com/news/globe/ideas/articles/2005/07/03/why_do_suicide_bombers_do_it?mode=PF By Christopher Shea | July 3, 2005 Four years ago, the late Susan Sontag was excoriated for arguing, in a brief New Yorker piece, that the attacks that brought down the World Trade Center were inspired not by hatred of ''civilization" or ''the free world" but rather by opposition to ''specific American alliances and actions." Today that argument--seen by hawks in those dark post-Sept. 11 days as treasonously empathetic--has become a commonplace in the latest political science work on terrorism. No one, for example, is hurling charges of crypto-treason at Robert A. Pape, an associate professor of political science at the University of Chicago known for hard-nosed studies of air power in wartime. But Pape's new book, ''Dying to Win: The Strategic Logic of Suicide Terrorism" (Random House), which grew out of a much-cited 2003 article in the American Political Science Review, is a prime example of the mainstreaming of Sontag's once-taboo view. ''Suicide terrorism is a response to occupation," Pape says in a phone interview. ''Islamic fundamentalism has very little to do with it."
''Dying to Win" draws on a thorough database of all suicide attacks recorded since the contemporary practice was born during the Lebanese civil war in the early 1980s: a total of 315 incidents through 2003, involving 462 suicidal attackers. Of the 384 attackers for whom Pape has data, who committed their deeds in such danger zones as Sri Lanka (where the decidedly non-fundamentalist, quasi-Marxist Tamil Tigers have used suicide attacks since 1987 in their fight for a Tamil homeland), Israel, Chechnya, Iraq, and New York, only 43 percent came from religiously affiliated groups. The balance, 57 percent, came from secular groups. Strikingly, during the Lebanese civil war, he says, some 70 percent of suicide attackers were Christians (though members of secular groups). The thrust of his argument is that suicide terrorism is an eminently rational strategy. Everywhere it has been used, the countries that face it make concessions: The United States left Lebanon; Israel withdrew from Lebanon and now (much of) the West Bank; and Sri Lanka gave the Tamils a semiautonomous state. Since occupation spurs terrorism, Pape concludes that America should ''expeditiously" (but not recklessly) withdraw troops from Iraq. It should also reduce its energy dependence on the Middle East, refrain from posting troops in the Gulf States, and return to a strategy of balancing the Middle Eastern countries against one another from afar--policy prescriptions that have inspired criticism apart from his social science. (''Wouldn't [Pape's recommendations] be the ultimate concession to the suicide strategy?" Martin Kramer, a specialist in Middle Eastern studies, asked after the 2003 article appeared.) In the views of some critics, Pape's original article erred by dismissing all talk of religious or cultural factors in suicide bombings. If suicide attacks were a universally rational weapon of the weak, the critics argued, we would see them everywhere--and we don't. 
In fact, in a fascinating contribution to the new essay collection ''Making Sense of Suicide Missions" (Oxford), the Yale political scientist Stathis Kalyvas and a Spanish colleague, Ignacio Sanchez Cuenca, point out that FARC, the Colombian rebel group, once hatched a plan to fly a plane into that country's presidential palace but could find no willing pilot, even after dangling an offer of $2 million for the pilot's family. In addition, the Basque group ETA has rejected offers from its members to blow themselves up for the cause. But in the book, Pape reconsiders those cultural factors: Suicide bombing, he now writes, is most likely to happen when the occupying force and the ''occupied" insurgents are from different religious backgrounds. (The Tamil minority in Sri Lanka are mostly Hindu and Christian; the Sinhalese majority are Buddhists.) Research by other scholars backs up this point. David Laitin, a Stanford University expert on civil wars, and Eli Berman, an economist at the University of California at San Diego, have demonstrated that while only 18 percent of the 114 civil wars since 1945 have pitted members of one religious group against another, fully 90 percent of suicide attacks take place in inter-religious conflicts. Laitin and Berman, too, view suicide terrorism as following impeccable game-theory logic: When your targets are ''hard" and the enemy is wealthy, well armed, and possessed of good intelligence, they write, suicide bombing begins to make sense as a strategy. However, Diego Gambetta, an Oxford University sociologist and the editor of ''Making Sense of Suicide Missions," thinks these claims of rationality among self-immolators go a bit too far. First, do the attacks achieve as much as Pape contends? Israel had already committed to pulling out of the West Bank under the Oslo accords when a fresh wave of attacks came in 1994 and 1995. Far from causing the withdrawal, he argues, the attacks may in fact have heightened Israeli resistance to it.
Then there's the question of Islam. There may be non-Islamic suicide bombers, Gambetta writes. But ''we do not have even a single case of a non-Islamic faith justifying" suicide missions. Gambetta makes a tentative cultural-historical argument, tracing the suicidal impulse in the Middle East back to the Iran-Iraq war, when thousands of fundamentalist Iranian soldiers marched into certain death against Iraqi tank formations. That strain of self-sacrifice then spread into Lebanon and Palestine and now Iraq, through a badly understood dynamic. Conflicting theories aside, social scientists have made strides in understanding suicide bombers. Once considered the dregs of the earth (poor, uneducated, sexually starved), they have been shown--by Claude Berrebi, of the RAND Institute, among others--to be, on average, better educated and better off than their countrymen. Nevertheless, all the work on suicide terrorism has one major, merciful shortcoming: sample size. ''No matter how you count terrorist attacks, we are still well short of 1,000 of these episodes" since 1980, Gambetta says. Hard as it is to believe amid the grim daily dispatches from Iraq, suicide bombing remains, among the infinite numbers of ways humans cause bloodshed, exceedingly rare. Christopher Shea's Critical Faculties column appears in Ideas biweekly. E-mail critical.faculties at verizon.net. From checker at panix.com Tue Jul 26 19:38:51 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:38:51 -0400 (EDT) Subject: [Paleopsych] WT: Reading minds of suicide bombers Message-ID: Reading minds of suicide bombers http://www.washingtontimes.com/functions/print.php?StoryID=20050724-121809-5105r By David R. Sands Published July 24, 2005 They are not crazy. They are not coerced. And in most cases, researchers believe, the suicide bombers attacking U.S.
forces and their Iraqi allies in ever greater numbers aren't even Iraqis. A startling surge of deadly attacks across Iraq -- with hundreds killed in recent months -- has U.S. officials and private terrorism specialists scrambling to identify and understand the motivations of the suicide bombers. Given the grisly nature of most of the attacks, forensic evidence has been hard to find: A 20-year-old Saudi medical student is believed responsible for the attack last year in a U.S. Army mess tent that killed 22, and a Yemeni national was captured when his bomb failed to explode. But the vast majority of attackers have not been positively identified. Terrorism scholars say the attackers in Iraq mirror many of the patterns seen in other suicide terror waves, from Sri Lanka's Tamil Tigers to Palestinian Islamist groups targeting Israel. "These are willing volunteers. I have yet to find a single case of true coercion among suicide attackers," said Robert A. Pape, a political scientist at the University of Chicago and a leading scholar of modern suicide terrorism movements. "They're working within a defined organization with political goals, and most are socially well integrated -- technicians, ambulance drivers or some other midlevel occupation," he said. Suicide terrorists as a group are "rarely ignorant or impoverished," according to University of Michigan psychologist and anthropologist Scott Atran in a study last year published in the Washington Quarterly. "Nor are they crazed, cowardly, apathetic or asocial." The bombers benefit from a sophisticated network of handlers who offer safe houses and weapons, U.S. officials in Baghdad say. Repeated security sweeps have been unable to penetrate networks bringing militants from Saudi Arabia, Egypt, Yemen and elsewhere to blow themselves up in Iraq.
Contradicting another stereotype, suicide bombers in Iraq are in their late 20s or early 30s, many from the Arabian Peninsula or North Africa with families and well-established ties in their communities. As in the July 7 subway bombings in London, the bombers typically have little or no history of violence or religious activism. A new study by the Global Research in International Affairs Center in Israel found that virtually all of the 154 non-Iraqi Arab fighters killed in Iraq by coalition forces "have never taken part in any terrorist activity prior to their arrival in Iraq." Audrey Kurth Cronin, a terrorism specialist at the Congressional Research Service, noted in an analysis of suicide terrorism that the popular image that the attacks are carried out by "individual deranged fanatics" is "almost never the case." Suicide attacks share many of the characteristics of all terrorist strikes -- including gaining attention to the cause, anger, revenge, personal humiliation and retribution, she noted. Based on claims from Islamist Web sites and broadcasts, terrorism specialists estimate that perhaps three-fifths of Iraq's suicide attacks are carried out by Saudi nationals, coming in through the porous Syrian border. Mr. Pape said his research of suicide attacks worldwide over the past two decades finds Iraq very much in the pattern. Religion is far less of a factor than politics, he concludes. "In the vast majority of cases, the central objective of the suicide terrorist campaign has been to force a democratic state to leave an occupied homeland," he said, citing cases ranging from Sri Lanka and Lebanon in the 1980s and 1990s to Chechnya and Iraq today. Mr. Pape said in a telephone interview it was unlikely the terrorist networks would run out of suicide recruits, given the flow of foreign fighters into Iraq and the resistance of many Iraqi Sunnis to the U.S.-backed government. But some critics argue Mr.
Pape's work understates the importance of religion -- and particularly radical Islamist ideology -- in modern suicide terror campaigns. Most of the suicide terror campaigns of the past two decades have been organized by Islamist groups. From checker at panix.com Tue Jul 26 19:39:23 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:39:23 -0400 (EDT) Subject: [Paleopsych] SW: On Scale and Complexity Message-ID: Theoretical Biology: On Scale and Complexity http://scienceweek.com/2005/sw050729-3.htm The following points are made by Neil D. Theise (Nature 2005 435:1165): 1) Complexity theory, which describes emergent self-organization of complex adaptive systems, has gained a prominent position in many sciences. One powerful aspect of emergent self-organization is that scale matters. What appears to be a dynamic, ever-changing organizational panoply at the scale of the interacting agents that comprise it looks to be a single, functional entity from a higher scale. Ant colonies are a good example: from afar, the colony appears to be a solid, shifting, dark mass against the earth. But up close, one can discern individual ants and describe the colony as the emergent self-organization of these scurrying individuals. Moving in still closer, the individual ants dissolve into myriad cells. 2) Cells fulfill all the criteria necessary to be considered agents within a complex system: they exist in great numbers; their interactions involve homeostatic, negative feedback loops; and they respond to local environmental cues with limited stochasticity ("quenched disorder"). Like any group of interacting individuals fulfilling these criteria, they self-organize without external planning. What emerges is the structure and function of our tissues, organs and bodies. 3) This view is in keeping with cell doctrine -- the fundamental paradigm of modern biology and medicine whereby cells are the fundamental building blocks of all living organisms.
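The agent criteria in points 1 and 2 -- many individuals, purely local interactions, feedback-governed rules -- can be illustrated with a minimal simulation (my own toy sketch, not from Theise's paper): a one-dimensional majority-vote automaton in which each site consults only its immediate neighbors, yet the whole line organizes into stable blocks.

```python
import random

def step(state):
    """Synchronous local update: each site adopts the majority value of
    itself and its two neighbors (cyclic boundary). No site ever sees
    the whole system."""
    n = len(state)
    return [1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]

random.seed(0)
state = [random.randint(0, 1) for _ in range(40)]
for _ in range(40):          # iterate until a fixed point, if one is reached
    nxt = step(state)
    if nxt == state:
        break
    state = nxt

# The line typically settles into contiguous blocks of 0s and 1s --
# global structure produced entirely by local rules.
print("".join(map(str, state)))
```

The rule is deliberately crude; the point is only that order at the higher scale emerges without any agent monitoring the global state, which is the sense of "emergent self-organization" used above.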
Before cell doctrine emerged, other possibilities were explored. The ancient Greeks debated whether the body's substance was an endlessly divisible fluid or a sum of ultimately indivisible subunits. But when the microscopes of Theodor Schwann (1810-1882) and Matthias Schleiden (1804-1881) revealed cell membranes, the debate was settled. The body's substance is not a fluid, but an indivisible box-like cell: the magnificently successful cell doctrine was born. 4) But a complexity analysis presses for consideration of a level of observation at a lower scale. At the nanoscale, one might suggest that cells are not discrete objects; rather, they are dynamically shifting, adaptive systems of uncountable biomolecules. Do biomolecules fulfill the necessary criteria for agents forming complex systems? They obviously exist in sufficient quantities to generate emergent phenomena; they interact only on the local level, without monitoring the whole system; and many homeostatic feedback loops govern these local interactions. But do their interactions display quenched disorder; that is, are they somewhere between being completely random and rigidly determined? Analyses of individual interacting molecules and the recognition that at the nanoscale, quantum effects may have a measurable impact, suggest that the answer is yes.[1-3] References: 1. Theise N. D. & d'Inverno, M. Blood Cells Mol. Dis. 32, 17-20 (2004) 2. Theise N. D. & Krause D. S. Leukemia 16, 542-548 (2002) 3. Kurakin A. Dev. Genes Evol. 215, 46-52 (2005) Nature http://www.nature.com/nature -------------------------------- Related Material: PHYSICS AND COMPLEXITY The following points are made by Gregoire Nicolis (citation below): 1) For the vast majority of scientists physics is a marvelous algorithm explaining natural phenomena in terms of the building blocks of the universe and their interactions.
Planetary motion; the structure of genetic material, of molecules, atoms or nuclei; the diffraction pattern of a crystalline body; superconductivity; the explanation of the compressibility, elasticity, surface tension or thermal conductivity of a material, are only a few among the innumerable examples illustrating the immense success of this view, which presided over the most impressive breakthroughs that have so far marked the development of modern science since Newton. 2) Implicit in the classical view, according to which physical phenomena are reducible to a few fundamental interactions, is the idea that under well-defined conditions a system governed by a given set of laws will follow a unique course, and that a slight change in the causes will likewise produce a slight change in the effects. But, since the 1960s, an increasing amount of experimental data challenging this idea has become available, and this imposes a new attitude concerning the description of nature. Such ordinary systems as a layer of fluid or a mixture of chemical products can generate, under appropriate conditions, a multitude of self-organization phenomena on a macroscopic scale -- a scale orders of magnitude larger than the range of fundamental interactions -- in the form of spatial patterns or temporal rhythms. 3) States of matter capable of evolving (states for which order, complexity, regulation, information and other concepts usually absent from the vocabulary of the physicist become the natural mode of description) are, all of a sudden, emerging in the laboratory. These states suggest that the gap between "simple" and "complex", and between "disorder" and "order", is much narrower than previously thought. 
They also provide the natural archetypes for understanding a large body of phenomena in branches which traditionally were outside the realm of physics, such as turbulence, the circulation of the atmosphere and the oceans, plate tectonics, glaciations, and other forces that shape our natural environment: or, even, the emergence of replicating systems capable of storing and generating information, embryonic development, the electrical activity of the brain, or the behavior of populations in an ecosystem or in an economic environment. Adapted from: Gregoire Nicolis, in: Paul Davies (ed.), The New Physics. Cambridge University Press, 1989, p. 316 -------------------------------- Related Material: ON EVOLUTION AND COMPLEXITY The following points are made by N. Barton and W. Zuidema (Current Biology 2003 13:R649): 1) A central goal of evolutionary biology is to explain the origin of complex organs -- the ribosomal machinery that translates the genetic code, the immune system that accurately distinguishes self from non-self, eyes that can resolve precise images, and so on. Although we understand in broad outline how such extraordinary systems can evolve by natural selection, we know very little about the actual steps involved, and can hardly begin to answer general questions about the evolution of complexity. For example, how much time is required for some particular structure to evolve? 2) Complex systems -- systems whose function requires many interdependent parts -- are vanishingly unlikely to arise purely by chance. Darwin's explanation of their origin is that natural selection establishes a series of variants, each of which increases fitness. This is an efficient way of sifting through an enormous number of possibilities, provided there is a sequence of ever-increasing fitness that leads to the desired feature. To use Sewall Wright's metaphor, there must be a path uphill on the "adaptive landscape".
3) The crucial issue, then, is to know what variants are available -- what can be reached from where -- and what is the fitness of these variants. Is there a route by which fitness can keep increasing? Population genetics is not much help here. Given the geometry defined by mutation and recombination, and given the fitnesses, we can work out how a population will change simply by following the proportion of different types through time. But understanding how complex features evolve requires plausible models for the geometry of the adaptive landscape, which population genetics by itself does not provide. 4) "Artificial Life" -- the study of life as it could be --provides a variety of such models. For instance, Thomas Ray (1992) developed a model called "Tierra", where digital creatures are little computer programs that copy themselves and compete with each other for memory and processing time. Fitness here --just as in the real world -- is defined very indirectly by the rate of self-replication of the creatures relative to others. Ray's creatures evolved strategies to hinder competitors and even to parasitize other creatures. Karl Sims (1994) created a simulated physical world in which "digital creatures" successfully evolve both their bodies and brains in order to beat other creatures in a variety of tasks such as swimming, walking and jumping. Lipson and Pollack (2000), in a recent follow-up study, actually made such walking creatures as little robots and showed that the evolved locomotion strategies work even in the real world. Fitness in these models is defined implicitly by the complex relation between brain and body architecture and the resulting way of moving. 
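Wright's "path uphill on the adaptive landscape" can be made concrete with a minimal sketch (my own toy example, vastly simpler than Tierra or Sims's creatures; the bit-string genome and one-max fitness function are assumptions for illustration only):

```python
import random

def fitness(genome):
    """Toy 'one-max' landscape: fitness is the number of 1-bits, so every
    extra 1 is a step uphill. Real adaptive landscapes are far more rugged."""
    return sum(genome)

random.seed(1)
genome = [0] * 20                    # start at the bottom of the landscape
history = [fitness(genome)]
for _ in range(500):
    mutant = genome[:]
    mutant[random.randrange(len(mutant))] ^= 1   # flip one random bit
    if fitness(mutant) > fitness(genome):        # selection keeps only uphill steps
        genome = mutant
    history.append(fitness(genome))

print(history[0], "->", history[-1])  # fitness can only rise under this rule
```

Because this landscape has a single smooth peak, the uphill path always exists; the open question raised above is precisely what happens when it does not, which is what models like Tierra probe with implicitly defined fitness.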
Current Biology http://www.current-biology.com From checker at panix.com Tue Jul 26 19:45:10 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:45:10 -0400 (EDT) Subject: [Paleopsych] SW: On Synesthesia Message-ID: Neuroscience: On Synesthesia http://scienceweek.com/2005/sw050722-2.htm The following points are made by C. Mulvenna and V. Walsh (Current Biology 2005 15:R399): 1) The term "synesthesia" refers to a phenomenon in which an individual experiences a sense other than the one being stimulated. This unusual pairing is automatic, present since childhood and consistent across time. A specific experience will be activated by the same stimulus with a seemingly arbitrary connection. For example, the sight of the letter "q" may always activate the experience of a deep red color; or a middle C played on a violin may always activate the experience of the taste of tuna. The pairings can be more complex for some synesthetes; for example, a sequence of pitches may activate the sensation of gold, yellow and white moving rapidly upwards and at an angle to the right, like a rippling stream. The condition is also referred to as "sensory cross-activation". 2) Synesthesia does not apply to forced or acquired associations, such as the word "Christmas" having connotations with the color red, the smell of mince pies, or the general sound of Christmas carols. It also does not include sensations triggering memories, such as a song eliciting the memory of a person or place. 3) The first known reference to synesthesia in scientific writing is John Locke's account of a blind man who described the color scarlet as the sound of a trumpet in 1690. Similar isolated case-studies continued for some time, and it was described in detail by Francis Galton in 1883. Since then synesthesia has suffered repeated waves of dismissal as a phantom condition, despite continual reports of its existence.
It is only relatively recently, with the application of brain imaging techniques, that it has gained credibility in the scientific world as a genuine neurological condition, and this acceptance has led to the current surge in synesthesia research. 4) Sensory cross-activation in the brains of synesthetes has now been observed by positron-emission tomography (PET) and functional magnetic resonance imaging (fMRI). Activation of brain regions associated with visual perception was observed in blindfolded synesthetes listening to words that evoked visual experiences. These activations were shown to be clearly different from those evoked in either non-synesthetes or the same synesthetes listening to tones that did not evoke visual experiences. Activation of areas strongly associated with the perception of color was observed in a group of word-color synesthetes. This was not observed in non-synesthetes, even after they were trained to associate pairings of words with colors. Current investigations are examining whether this neurological trend is observable across subtypes involving other senses. 5) One theory suggests that, rather than synesthesia being caused by extra connections "growing" between sensory areas, the apparent cross-activation could be a result of reduced apoptosis which aids differentiation of the sensory areas of the brain in the first months after birth. Because of this increased sensory connectivity, some experiences between certain senses in infancy may stay fixed in the brain. If this is the case, we were all synesthetes at one stage, but sensory modularity developed more explicitly in non-synesthetes.[1-5] References (abridged): 1. Baron-Cohen, S. (1996). Is there a normal phase of synesthesia in development? Psyche: An Interdisciplinary Journal of Research on Consciousness, Volume 2, number 27. 2. Baron-Cohen, S., Burt, L., Smith-Laittan, F., Harrison, J., and Bolton, P. (1996). Synesthesia: Prevalence and similarity. Perception 25, 1073-1080 3.
Baron-Cohen, S. and Harrison, J.E. (1997). Synesthesia: Classic and contemporary readings. Cambridge, Massachusetts: Blackwell Publishers 4. Cohen-Kadosh, R., Sagiv, N., and Linden, D.E.J. (2005). When blue is larger than red: colors influence numerical cognition in synesthesia. J. Cogn. Neurosci., in press 5. Galton, F. (1883). Inquiries into human faculty and its development. London Press Current Biology http://www.current-biology.com -------------------------------- Related Material: THERMAL STIMULATION OF TASTE SENSATION The term "chemoreceptors" refers to biological cells specialized to respond to chemical stimuli, and the function of such a cell is to signal to the nervous system a change in the chemical environment. In humans, for example, major use of chemoreceptors occurs in those parts of the body specialized for taste (gustatory sense) and smell (olfaction). Taste receptors are found in the epithelium of the tongue, and these receptors are responsible for sour, sweet, salty, and bitter sensations from food applied to the tongue. Taste receptors are also found in the pharynx and the upper part of the esophagus. In contrast to olfactory receptors, taste receptors do not have their own output extensions (axons) to send signals to the central nervous system, but instead taste receptors stimulate the endings of nerve fibers that send input to the central nervous system ("afferent fibers"). Taste receptor cells are gathered into groups as "taste buds", and the sensing of taste stimuli occurs in finger-like projections (microvilli) at the surface of these taste buds, with various chemical mechanisms proposed to account for transduction of taste stimuli. In general, sourness depends primarily on the acidity of a chemical stimulus, and salty sensations are evoked by solutions with a high sodium concentration.
Sweetness and bitterness, on the other hand, are apparently transduced by specific *receptor cell membrane receptors for sugars, amino acids, and other chemicals. Threshold concentrations for taste sensations produced by most ingested substances are relatively high. For example, the threshold concentration for sodium chloride is approximately 10 millimolar; for sucrose, 20 millimolar; for citric acid, 2 millimolar. The threshold is much lower for certain bitter-tasting, potentially dangerous plant compounds: the threshold concentration for quinine is 0.008 millimolar, and for strychnine 0.0001 millimolar. In humans, approximately 4000 taste buds are distributed throughout the oral cavity and upper alimentary canal. Taste buds are approximately 50 microns wide at their base and approximately 80 microns long, each bud containing 30 to 100 taste receptor cells. Approximately 75 percent of all taste buds are found on the upper (dorsal) surface of the tongue. The following points are made by A. Cruz and B.G. Green (Nature 2000 403:889): 1) The authors point out that the first electrophysiological recordings from animal and human taste nerves (in 1935 and 1985 respectively) provided clear evidence of thermal sensitivity, and studies have indicated that as many as half the neurons in the mammalian taste pathways respond to temperature. Since temperature has never been shown to induce sensations of taste, it has been assumed that thermal stimulation in the taste system is somehow nullified. 2) The authors report, however, that heating or cooling small areas of the tongue can in fact cause sensations of taste: warming the front (anterior) edge of the tongue (which is innervated by the chorda tympani nerve) from an initially cold temperature can evoke sweetness, whereas cooling can evoke sourness and/or saltiness.
Thermal taste also occurs on the rear of the tongue (which is innervated by the glossopharyngeal nerve), but the relationship between temperature and taste is different in that location from that found in the front of the tongue. 3) The authors suggest these observations indicate the human taste system contains several different types of thermally sensitive neurons that normally contribute to the sensory code for taste, and that although there is evidence for neurons whose chemosensitive mechanisms are temperature sensitive, thermal sensitivity in some taste neurons may arise from cellular processes unrelated to chemosensory transduction. Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: receptor cell membrane receptors: This phrase is a good illustration of the two uses of the term "receptor" in cell biology. Biological cells specialized to respond to specific physical or chemical stimuli are called "receptor cells", or merely "receptors". However, specific proteins or groups of proteins, embedded in the surface of a single cell, which respond to interactions with specific ions, chemical groups, or molecules, and send molecular-level signals to the interior of the cell, are also called "receptors". -------------------------------- Related Material: CROSS-MODALITY SENSATION The following points are made by Alison Motluk (The Scientist 2001 11 Aug): 1) The classical and prevailing view of the brain holds that there are 5 separate senses feeding into 5 distinct brain regions genetically wired to handle one and only one sense each. Sensory information is thus parcelled up and analyzed in isolation.
Recent research, however, demonstrates that people who are born blind use the visual cortex when they read Braille, and this has led to the idea that everyone has the capacity to use non-classical regions for the analysis of sensory information under certain circumstances, and that the brain is much more versatile than many researchers have believed. 2) The brain is apparently able to quickly recruit new areas for sensory analysis and also able to quickly reverse the recruitment, with a time-scale apparently too short to involve new connections. Tactile and auditory input into the visual cortex is apparently present in all people and can be utilized for analysis if behaviorally desirable. 3) Some researchers now believe that the brain is not organized into specific sensory modalities, but instead it is split into units with specific tasks or particular problems to solve, and these task-oriented or problem-solving units simply use the most relevant information available. The units may prefer certain senses at certain times and under certain conditions, and prefer other senses at other times. Vision, for example, may be the preferred way to judge distances, but in the absence of vision, hearing or touch sensation may be used to complete the same analysis. The Scientist http://www.thescientist.com From checker at panix.com Tue Jul 26 19:45:17 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:45:17 -0400 (EDT) Subject: [Paleopsych] The Times: Put your sweet lips... Message-ID: Put your sweet lips... http://www.timesonline.co.uk/article/0,,592-1647622,00.html et seq. 5.6.11 A simple gesture that can express love and reverence -- or insult and betrayal. A kiss, Keith Thomas discovers, is never just a kiss Look at these people! They suck each other! They eat each other's saliva and dirt! 
-- Tsonga people of southern Africa on the European practice of kissing, 1927 In what must still be the longest single work devoted to the kiss -- Opus Polyhistoricum . . . de Osculis -- the German polymath Martin von Kempe (1642-83) assembled 1,040 closely packed pages of excerpts from classical, biblical, legal, medical and other learned sources to form a sort of encyclopaedia of kissing. He listed more than 20 types of kiss. These included the kiss of veneration, the kiss of peace, the kisses bestowed by Christians on images and relics, and by pagans on idols, the kissing of the Pope's foot, the kiss bestowed by superiors on inferiors, the kiss used in academic degree ceremonies, the lovers' kiss, the lustful and adulterous kiss, the kiss exchanged by couples sealing their marriage vows, the kiss of reconciliation, the kiss carrying contagion, the hypocritical kiss and the kiss of Judas. It would not be difficult to prolong von Kempe's list ad infinitum. For kisses take so many different forms. A kiss can be given in private or in public, by men to men, men to women, women to women, adults to children or children to each other. They can be unilateral or reciprocated. They can be on the lips, on the cheek or on any other part of the body. They can be blown in the air. A kiss can express deference, obedience, respect, agreement, reverence, adoration, friendliness, affection, tenderness, love, superiority, inferiority, even insult. There is no such thing as a straightforward kiss. The conventions governing the use of the kiss as a gesture of greeting or farewell have, for most historical periods, been established only in the most fragmentary outline, and then usually only for the upper classes of society. What sort of gestures, if any, were exchanged on meeting and parting by two 12th-century serfs? When, if at all, did an 18th-century collier's wife kiss her friends? These are not questions to which it is yet possible to give a confident answer. 
But the kiss does have a history. While psychologists and psychoanalysts tend to write as if kissing has a universal and unchanging meaning (for Freud, the erotic kiss is an attempted return to the security of the mother's breast), it is far from a universal practice. It seems to have played a less conspicuous part in either the ritual or the erotic life of most Asiatic, Polynesian or sub-Saharan societies, while in the West the norms and conventions governing its employment have, from the beginning, been constantly evolving. One could attempt to summarise this evolution by saying that the use of the kiss as a ceremonial means of expressing and cementing social, personal and political relationships has, during the past 800 years, tended to diminish, whereas its erotic significance has been increasingly emphasised. Since the days of the early church, Christians had exchanged a holy kiss of peace as a symbol of their unity in Christ. But in due course the male and female members of the congregation were segregated so as to avoid kissing between the sexes and, from the 13th century onwards, the members of the congregation began to kiss the osculatorium or pax-board rather than each other. In the 16th century Protestants omitted the kiss altogether. Other forms of ceremonial kissing also disappeared. At some point in the late medieval or early modern periods (14th to 18th centuries), the handshake, oath or written document superseded the kiss as the accepted symbols of reconciliation. In England, at least, kissing between males was unusual, other than within the family or in courtly ceremony. In his Troilus and Criseyde (late 1380s), Chaucer notably minimised the degree of physical contact between men which he found in the Italian source for his story. 
When Thomas Coryate, an experienced courtier, visited Venice in 1608, he thought it "an extraordinary custom" that two male acquaintances would "give a mutual kiss when they depart from each other, by kissing one another's cheek: a custom, that I never saw before, nor heard of, nor read of in any history". Meanwhile, the erotic meaning of the kiss became increasingly central. In 1649 an English observer could write that the kiss was used "in salutation, valediction, reconciliation . . . congratulation, approbation, adulation, subjection, confederation, but more especially and naturally in token of love". The mouth became more welcoming with the advent of more effective dentistry -- which did something to diminish halitosis and produce gleaming white teeth -- and the sexual connotations of the kiss became more apparent and its meaning more ambiguous. Eventually the ambiguity proved too much; and, for social and ritual purposes, the meeting of lips had to be replaced by other words and actions, less susceptible to misinterpretation. The English social kiss between men and women had been on the lips and therefore disappeared, whereas the French kiss on the cheeks was less blatantly erotic and accordingly proved more enduring. Any form of kissing between men similarly became objectionable once the idea of homosexuality had been clearly formulated. In 1626 the writer William Vaughan deplored the "unnatural kiss of man with man, a minion-kiss, such as Jupiter used to Ganymede his cup-bearer". Under the later Stuarts, French influence on courtly manners created a temporary fashion for foppish young men to kiss each other on the cheek, but in the 18th century men seen kissing were likely to be accused of sodomy. Affectionate kissing and touching between women friends and acquaintances lasted much longer, because the notion of lesbian love was slower to take root than was that of male homosexual desire.
Even so, physical tokens of affection between women were thought more seemly if not exchanged in company or in the presence of men. This interpretation, according to which the sexual meaning of a kiss gradually drove out all other meanings, is open to two obvious objections. First, it glosses over the fact that the kiss had always had an erotic connotation. In ancient times as now, the lips were an erogenous zone, whatever cultural conventions may have implied. There were always problems about using so intimate an act for the purposes of public ritual -- that was why the early Fathers were worried about the kiss of peace -- and there was always scepticism and distrust of the Neoplatonic notion that kisses could unite souls without awaking a desire for the union of bodies. Medieval literature abounds in equivocal kisses, with lovers and lechers exploiting the social and religious conventions of the day to advance their own particular sexual agendas. The fine line separating social from sexual kissing caused much social anxiety and provided rich possibilities for drama, both comic and tragic. An English traveller in the 1770s to Scotland, where French-style social kissing was practised, remarked that "it very seldom happens that the salute is a voluntary one, and it frequently is the cause of disgust and embarrassment to the fair sex". This inherent potential for misunderstanding is what distinguishes the history of the kiss from that of most other gestures. It might be argued that it is a matter of indifference whether a hostess greets her guests by kissing them on the mouth, as in Tudor England; by shaking hands, as in mid-20th-century Britain; or by rubbing noses, as in Polynesia. They are all ways of making strangers welcome; and, so long as everyone concerned understands the conventions, it might seem of no consequence which one is employed.
But what distinguishes the kiss from so many other multi-purpose gestures is that its sexual nature has always lent a potential ambiguity to its meaning. The second objection to the notion that the ritual kiss was subverted by the erotic kiss is that it implies that the trend was all in the same direction. In fact, there were periods when the movement was the other way. The early Middle Ages had seen the rise of the kiss, as it came to occupy an unprecedentedly central position in a wide range of secular and ecclesiastical rituals. The Romans, by contrast, did not employ even the social kiss until the period of the Empire, and then only among the aristocracy. The social kiss between men seems to have fluctuated according to fashion. The habit among elegant young males at the later Stuart courts of kissing each other has already been mentioned. "Sir, you kiss pleasingly," says one of them in an early-eighteenth-century play, "I love to kiss a man, in Paris we kiss nothing else." In the trenches of the First World War, the imminent threat of death could lead to a suspension of normal codes of behaviour and restore the male kiss as a non-sexual symbol of intimacy, while in the late 20th century, it has become commonplace for men on the football field to exchange kisses at moments of triumph. These developments are part of an altogether larger relaxation of bodily inhibitions which has occurred in the West since the 1960s. The social kiss and hug have returned, much to the embarrassment of middle-aged Britons, who have grown up accustomed to a far greater degree of bodily distance. The subject also has a medical dimension. For the attitude to kissing can change when breath and saliva are regarded as potential instruments of infection. The Roman Emperor Tiberius (AD 14-37) issued a decree banning kissing, because it was believed to be responsible for the spread of an unpleasant fungoid disease called mentagra, which disfigured the faces and bodies of Roman nobles.
Not that the avoidance of bodily contact was always so rational. Some bodily habits, which had been happily tolerated in one age, became wholly unacceptable in another. No one has ever exceeded the Roman epigrammatist Martial (late 1st century AD) in evoking the nauseous experience of having to kiss lips and faces covered with dirt, snot, ulcers and scabs. Thereafter there were many such complaints. The social and physical squeamishness of 18th-century doctors prevented them from adopting mouth-to-mouth resuscitation as a respectable medical practice, even though they were aware of its life-saving potentialities. In the same century, authorities on politeness condemned the practice of those who "put their faces so close to yours as to offend you with their breath" as a "horrid and disgustful habit". When aristocratic Romans of the imperial age took up the practice of kissing friends and clients, they perfumed their breath with myrrh. How far, one wonders, have modern dentistry and breath-sweeteners been a precondition of the return of the social kiss in modern times? Edited extract from the afterword to The Kiss in History, edited by Karen Harvey (Manchester University Press, £15.99, offer £12.79 plus £2.25 p&p, call 0870 1608080) Sir Keith Thomas is the author of Religion and the Decline of Magic (Penguin) From checker at panix.com Tue Jul 26 19:45:26 2005 From: checker at panix.com (Premise Checker) Date: Tue, 26 Jul 2005 15:45:26 -0400 (EDT) Subject: [Paleopsych] WP: The 'Bad' Guy Message-ID: The 'Bad' Guy http://www.washingtonpost.com/wp-dyn/content/article/2005/06/20/AR2005062001458_pf.html Steven Johnson Thinks Video Games And Violent TV Are Good for the Brain By Bob Thompson Washington Post Staff Writer Tuesday, June 21, 2005; C01 NEW YORK There seem to be two Steven Johnsons. And at this particular moment, it's hard to believe they're the same guy.
There's Steven Johnson, Swell Dad, the one who sits across the table from you in his Brooklyn dining room and politely interrupts your conversation to commune with a way-cute toddler who's dashed in bearing bottled water and news from the outside world. "Hi, Rowan! Oh, thank you, that's very helpful. Was it hot outside, buddy?" he says. Then there's Steven Johnson, Parents' Nightmare. This is the guy who's been parading around calling video games like Grand Theft Auto and TV shows like "24" brain food for your kids. He's the provocateur who titled his most recent book "Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter" -- a deliberate "nana-nana-boo-boo" to the Books Are Better crowd. "The most debased forms of mass diversion -- video games and violent television dramas and juvenile sitcoms -- turn out to be nutritional after all," Johnson writes. They offer an increasingly rigorous "cognitive workout." What's more, the mental skills they hone "are just as important as the ones exercised by reading books." Well. You're a parent of two teenagers who has spent years trying to reduce their exposure to the addictive, sexualized, violent and relentlessly commercial output of the Great American Pop Culture Machine. (They've turned out just fine, thank you, despite never owning GameCubes.) You've read "Everything Bad" and found it smart and stimulating but also utterly infuriating. Twelve pages from the end, you've hit a passage so annoying it made you want to schlep up to Brooklyn and fling Johnson's argument back in his handsome, smiling face. Which would be a lot easier if he weren't such a likable guy -- and if that charming child of his didn't keep getting in the way. 'Almost Like a Life Form' Johnson's championship of popular culture comes with a significant irony: If he'd been born just a couple of years earlier than 1968, he'd likely be teaching "Middlemarch" to undergraduates today. 
He grew up in Glen Echo, the son of a lawyer and a health-care advocate. From the beginning he was a huge reader, the kind of kid who starts thinking at age 9 that he wants to be a writer. As an undergraduate at Brown, he majored in semiotics. As a grad student at Columbia, he studied English lit. "I sat there reading 75 19th-century novels when I was 24," he says, laughing, "and it's a huge part of who I am." But Johnson was something else as well. He was the kind of kid who'd sit in his room for hours playing baseball simulation games -- the pre-electronic variety, which featured sets of dice and sheaves of complex statistical data. He was the kind who, frustrated by the flaws he found in these simulations, went ahead and designed his own. When the first electronic games appeared, he played them, too. Still, he was no obsessive -- until Myst and Sim City came along. This was in the mid-'90s, during his grad school days. Exploring the vivid worlds of the new games "was, like, oh my God, I feel like I fast-forwarded 10 years." The second version of Sim City, in particular, gave him the feeling that the urban landscape he was shaping on his computer screen was "almost like a life form." If a single game could come alive that way, what would a whole computer-connected world be like? It was the perfect question for a tech-loving guy who wanted to write. Goodbye, "Middlemarch"; hello, Feed magazine. Johnson never finished his dissertation. Instead, he helped start one of the first significant general interest publications online -- joining others in his tech-savvy generation who "saw opportunity where a lot of other people saw something confusing and scary," as Feed co-founder Stefanie Syman once explained. The magazine had a classic new-economy roller-coaster ride. It was hand-to-mouth at first, Johnson says: "Oh, we raised $20,000! Oh, we can pay the rent!" 
A few years later, having added a couple of other Web ventures, his company was doing well enough to raise a few million dollars and hire a real CEO -- just in time for the market to crash. Feed was history. But Johnson came out just fine. The magazine had helped establish him as a chronicler of the networked world. He had one book out and another poised for publication. We're not talking thrillers here: Johnson has made a career of popularizing the complex. In "Interface Culture" (1997), he explored the idea that because we're now sharing so many communal spaces online, interfaces and the folks who create them are hugely important. In "Emergence" (2001), he looked at self-organizing systems in everything from ant colonies to computer simulations. In last year's "Mind Wide Open," he offered a lively tour through the workings of the brain. For years, meanwhile, he had been charting the "incredible growth in complexity and challenge" of those video games that non-gamers still thought of as moronic, immoral or both. Then he thought: Isn't television evolving the same way? Weren't shows like "The Sopranos," "The Simpsons," "Seinfeld" and "24" demanding more of their viewers at the same time the tube was under attack for producing sleazy, lowest-common-denominator fare? "It occurred to me that there was a bigger argument to be made," Johnson says. The author and his wife talked about steeling themselves for a negative reaction -- because if "Everything Bad" did its job, it was going to make some people mad. Sure enough, there's a mad person sitting in their dining room right now. 'Exploring an Environment' You want to start shouting at him right away. What about the stuff "Everything Bad" ignores? What about all that sex and violence you don't want your kids exposed to in second grade? Or the highly addictive nature of video games? Or the toxic sea of commercialism in which all that televised complexity must float? 
What does Johnson really mean when he says these things are "good for you"? But it would be best, perhaps, to start with some points of agreement. Okay. It's true, as Johnson says, that video games can be intensely challenging and absorbing, and that book-loving snobs tend to be oblivious to this fact. It's true that "The Sopranos" is complicated and subtle as well as violent. And although you yourself don't watch "24," your smart colleagues talk endlessly about its intricate plotting. What's more: You love how comfortable your kids are with new technology. You totally agree that "the ability to take in a complex system and learn its rules on the fly is a talent with great real-world applicability." Maybe they can support you in your old age! In fact, if you ignore the absurdly sweeping assertion of Johnson's title -- and hey, he says, if you can't see that "Everything Bad Is Good for You" is winking at the reader, we've really got a failure to communicate -- his core argument seems reasonable enough. To summarize briefly: He's talking trends, not absolutes, and over the past 30 years, the trend in both video games and television shows has been toward forms that are more cognitively demanding. (He doesn't dwell on the Internet, which he thinks needs little defense.) Why the upward trends? When it comes to gaming, Johnson invokes some of the neuroscience he studied for his last book. Human brains are drawn to systems, he suggests, in which "rewards are both clearly defined and achieved by exploring an environment." The exploration part is key: Gamers have to figure out the rules as they go along, and "no other pop cultural form directly engages the brain's decision-making apparatus" the way video games do. With television, Johnson's argument rests more on economics. Complex narratives that "force you to work to make sense of them" have been rewarded by a marketplace where profit now depends heavily on repeat performances, whether on DVD or in syndication. 
Making shows more challenging to decode makes perfect sense if you're assuming they'll be watched more than once. Games aren't "Hamlet" or "The Great Gatsby," Johnson writes; they're more like mathematical logic problems. As such, "they are good for the mind on some fundamental level: They teach abstract skills in probability, in pattern recognition, in understanding causal relations that can be applied in countless situations, both personal and professional." But, but, but . . . Too many questions get played down -- or left out entirely -- as Johnson argues his case. So you ask: What about all that sex and violence? It's not just the Parents Television Council that thinks the entertainment industry "has pushed the content envelope too far." Does Grand Theft Auto have to make people smarter by rewarding them for killing prostitutes? Johnson doesn't quite answer this directly. "I feel like the values questions, the violence questions, all those kind of content questions that I kind of put off to the side, I don't put off to the side because they're irrelevant," he says. Violence "is absolutely a legitimate thing to talk about." Take "24," for example. "I think it's a brilliant show on a whole host of levels, but the torture in it is really offensive." Why not talk about it, then? "Because we're only focusing on these other issues, and I think the cognitive stuff is as important. . . . What I'm trying to say is, let's put those questions aside and continue having that debate, but let me introduce you to this whole major story line that you haven't heard. And if you don't have that story line, then you can't make informed decisions about what your kids should be doing." So how about the addiction thing? "Everything Bad" sets out to explain why gamers willingly spend hours on tasks that seem, at least to a non-gamer, intensely boring. "The power of games to captivate involves their ability to tap into the brain's natural reward circuitry," Johnson writes. 
A bit later he acknowledges a small problem. "You might reasonably object that I have merely demonstrated that video games are the digital equivalent of crack cocaine." Well, yeah, you bet, but when you do object, here comes that positive story line again. Can't constantly gaming kids become addicted? "Absolutely. No question about it," Johnson agrees. But he says the brain's craving for rewards, like the Force in "Star Wars," can be used for good as well: "You can get them to do things much more challenging mentally than what I was doing when I was sitting around watching TV" as a kid. Speaking of which: Some parents (you're one) object as much to television advertising as to the shows themselves. We don't want our kids constantly being told that buying stuff is the key to feeling good about themselves. Johnson's solution to this? Well, it turns out he and his wife don't watch much regular TV. "We started watching all these TV shows on DVD," he says -- "Six Feet Under," "24," "The West Wing" -- "which is the most beautiful way to watch them, because you get to see the long format narrative at its best." Right now they're watching "Lost" with the aid of their TiVo machine, which also allows them to skip the commercials. Give him credit for consistency: Johnson doesn't stop at saying these widely praised, long-form TV dramas are more challenging than they used to be. Even reality television, he maintains, is better for viewers than old-time game shows or "Mork & Mindy." Why? Because it enhances the viewers' emotional intelligence by getting them to "analyze and recall the full range of social relationships in a large group." Oh, please! Wouldn't they learn faster by turning off the tube and interacting with actual human beings ? "Yes. Right. Exactly right," he says calmly. But if you assume "people are going to spend some amount of their time in front of screens . . ." Not assuming that, apparently, isn't an option. 
Time to bring up the passage that so maddened you when you came across it, near the end of Johnson's hymn to pop culture. The one in the section beginning "Now for the bad news." It's true, he finally admits, that "a specific, historically crucial kind of reading has grown less common in this society: sitting down with a three-hundred-page book and following its argument or narrative without a great deal of distraction." It's true that video games and TV do a poor job of "training our minds to follow a sustained textual argument or narrative" -- at least one that "doesn't involve genuine interactivity." But not to worry: "We still have schools and parents to teach wisdom that the popular culture fails to impart." When you first read this sentence, all you could think was: Thanks a lot, pal! No problem! We'll just drag 'em away from those Xboxes and whup 'em into shape! You convey this reaction to the sentence's author. Just what does he think schools and parents are competing with, you ask. He is unfazed. After all, he has made his argument in a 200-page book. "Middlemarch," too, will doubtless survive, he says. 'He Seems Really Into Books' "Hey, little man," Johnson says. Here comes that cute toddler again, heading straight for Dad, paying not the slightest attention to the matching PowerBooks that sit on the counter nearby. He will soon enough, though. His brother already does. Not quite 4 yet, the older boy "pops down in the morning, goes up on that chair, turns on that computer, pulls down his user account, types in his password, watches the Web browser, goes to Sesamestreet.org and starts playing these little interactive games," Johnson says. So before he has started to read and write, he's already got a life on the screen. Does this worry his father? It does not. "He seems really into books," Johnson says. "Being read books is a crucial part of his life." And never mind that they read him "mainly programming manuals." Joke! That's a joke! 
The man is pulling your leg! The kid is a huge Curious George fan! In truth, Johnson's life appears far more balanced than his book. Yes, there are two laptops on the counter, but right next to them hangs a Barnes & Noble tote bag with a portrait of Charles Dickens on it, and nearby sits a copy of Ian McEwan's "Saturday," which Johnson recently read (and loved). Yes, there's a giant, flat Philips Widescreen mounted on the living room wall, but it's surrounded by shelves and shelves of books, among them "Complexity: The Emerging Science at the Edge of Order and Chaos," by Mitchell Waldrop, and "The Golden Bowl," by Henry James.

"Everything Bad Is Good for You" was deliberately written as a polemic, Johnson says, and he knows perfectly well it's one-sided. He could have written a longer, more balanced book that said, "Here's an overall assessment of the entire state of today's culture," he says, "but that's the kind of book that nobody listens to." People have been listening to his, at least to judge from the continuing flow of media requests: He's been doing interviews for weeks, and as he speaks, in early June, he's about to go on "The Daily Show" and Jim Lehrer's "NewsHour." What's more -- despite his fears, and present company excepted -- he's been surprised at how positive the responses have been.

Ah, but you've got a theory of why that is. Call it the Red Wine Syndrome. Take something that's known to be wildly destructive when taken in excess: something that can wreck your liver, destroy your family, create bloody mayhem on the highway and turn you into a pathetic, falling-down wretch. Then have some scientists announce that, taken in moderation, this thing can . . . prevent cancer! If you're a drinker who's sick and tired of being scolded, you're going to be pretty excited about this news. Johnson laughs. He's heard this kind of connection made before. "A few desperate people," he says, "are, like: 'Please tell me that I can smoke again!'
" From checker at panix.com Wed Jul 27 20:39:19 2005 From: checker at panix.com (Premise Checker) Date: Wed, 27 Jul 2005 16:39:19 -0400 (EDT) Subject: [Paleopsych] Moscow Times: No Laughing Matter Message-ID: No Laughing Matter http://context.themoscowtimes.com/print.php?aid=154775 [2]Tiny Revolutions in Russia: Twentieth-Century Soviet and Russian History in Anecdotes By Bruce Adams RoutledgeCurzon, 173 Pages. $97 The Soviet police state was probably one of the least funny regimes in history, but the jokesters collected in Bruce Adams' book didn't see things that way. By Carl Schreck Published: July 22, 2005 Educated as an engineer, Klava was working as a manicurist to pay the bills while she waited for permission to leave Brezhnev's Soviet Union. She was tending to her customers' fingernails one day when a longtime client sauntered in and tried to move to the head of the line. Klava explained that she would have to wait like everyone else, and while she waited, Klava and her customers swapped political wisecracks subverting popular Party slogans: "When we say Lenin, we mean Party," they said. "When we say Party, we mean Lenin. And this is how we deal with everything. We say one thing, we mean something else." After Klava had finished with her other customers, the unscheduled client told a joke of her own: "There was a competition for the best joke about Lenin. And the first prize is 10 years to where Lenin used to go," meaning jail or exile. The client, it turned out, was a KGB agent. "If I did not value you as my manicurist, I would send you for 10 years to where Lenin used to go," she said. Klava was eventually allowed to emigrate, and the story appeared in an article by anthropologist Elliott Oring last year. 
Still, the irony of a KGB employee threatening the seditionist with a seditious joke beautifully embodies not only the prevalence of political jokes in Soviet society and the dangers associated with such humor, but the cruel, arbitrary nature of the Soviet regime. Jokes, or anekdoty, were indeed risky business in the Soviet Union, Bruce Adams maintains in the introduction to "Tiny Revolutions in Russia," his light if thoroughly entertaining recap of Soviet history told through a mix of amusing, tragicomic, baffling and plain unfunny jokes that will strike a familiar chord with any foreigner who has shared a couple bottles of vodka with a table full of Russians.

George Orwell was the first to dub jokes "tiny revolutions," but it's an especially fitting title for Adams' book, which reminds us that humor can have very serious consequences when the joke is on a totalitarian regime. The eight years Nobel laureate Alexander Solzhenitsyn spent in prisons and labor camps came as punishment for jokes he had made about Josef Stalin in his private correspondence, Adams writes. "The anecdotes were necessarily underground humor shared only with close friends."

The meat of "Tiny Revolutions" is divided into six chapters devoted to leaders from different eras. Vladimir Lenin, Stalin, Nikita Khrushchev and Leonid Brezhnev each get their own chapter, while Yury Andropov and Konstantin Chernenko, the infirm symbols of the crumbling Soviet gerontocracy, are forced to share, as are Mikhail Gorbachev, Boris Yeltsin and Vladimir Putin. Embedded in Adams' historical accounts of each period, the jokes address the absurdities of Soviet life and take down the vanguard of the world revolution a notch or two.

[Illustration by Viktor Bogorad. Caption: Soviet and Russian jokes have often been subversive, mocking elites from Lenin and Stalin to the New Russians of the 1990s.]
The Lenin years, Adams explains, marked the first appearance of Rabinovich, a staple Jewish character who can never quite find his place in Soviet society, despite the fact that anti-Semitism was supposed to have disappeared on the road to communism: "When no African delegates showed up at a Comintern Congress, Moscow wired Odessa [a very cosmopolitan port city with a large Jewish population]: 'Send us a Negro immediately.' "Odessa wired right back: 'Rabinovich has been dyed. He's drying.'"

Political jokes naturally continued in the Stalin era, lampooning everything from shortages (especially of food) and the First Five-Year Plan to grandiose construction projects like the White Sea-Baltic Canal, which saw hundreds of thousands of convicts dig 227 kilometers of canal "with primitive tools in horrible conditions." Jokesters appear to have been included in the work force: "'Who built the White Sea-Baltic Canal?' "'On the right bank -- those who told anecdotes, on the left bank -- those who heard them.'"

The funniest jokes in "Tiny Revolutions" are from the Khrushchev and Brezhnev periods, not only because both leaders were ripe for mockery, but also thanks to the political thaw that followed Stalin's death. Adams dubs the era "the golden age of the anecdote." The famously bald Khrushchev's meddling with agriculture policy is best summed up by one joke from the "Armenian Radio" genre, in which a quick-witted radio operator from Yerevan sticks it to the bosses in Moscow: "'What is Khrushchev's hair-style called?' "[Armenian Radio]: 'The harvest of 1963.'"

Similarly, the Brezhnev-era jokes tend to ridicule the increasing senility of His Eyebrowness: "Brezhnev begins his official speech opening the 1980 Olympics: 'O! O! O!' "His aide interrupts him with a whisper: 'The speech starts below, Leonid Ilich.
That is the Olympic symbol.'" But the Brezhnev chapter also includes a few gems about the rampant paranoia over foreign spies and insidious Western propaganda: "Because the BBC always seemed to know Soviet secrets so quickly, it was decided to hold the next meeting of the Politburo behind closed doors. No one was permitted in or out. Suddenly Kosygin grasped his belly and asked permission to leave. Permission was denied. A few minutes later there was a knock at the door. A janitress stood there with a pail: 'The BBC just reported that Aleksei Nikolayevich shit himself.'"

In a sharp departure from the earlier chapters, the funniest jokes from the Gorbachev, Yeltsin and Putin section are not political at all, but instead aimed at gaudy and ruthless New Russians. Take, for instance, the New Russian equivalent of "Why did the chicken cross the road?": "Two new Russians meet on a Paris street. 'Look at this,' brags the first, 'I just bought this Pierre Cardin tie for $300.' "'Big deal,' retorts the other, 'I got the same tie yesterday for $500.'"

The jokes in "Tiny Revolutions" are hit-or-miss throughout, and the final chapter is somewhat anticlimactic. According to Adams, this is because the political anecdote has essentially become obsolete due to increased employment, a booming stock market and the declining rate of poverty. It's not a particularly convincing causal link, nor an easy idea to explain in just three paragraphs, which is all he devotes to this provocative subject. "Tiny Revolutions" offers too cursory an account of 20th-century Russia to be considered an authoritative work of history, and with fewer than 800 jokes, it's not exactly a comprehensive anthology. To use basketball parlance, one might compare it to a "tweener" -- a player too small to play under the basket yet lacking some of the requisite skills to be effective from the perimeter. But the happy medium that Adams strikes is exactly why his book works so well.
Despite a standard-issue academic binding that threatens to induce sleep faster than a handful of Imovanes, "Tiny Revolutions" deserves a better fate than to be relegated to dust-collecting duties in Eastern European Studies sections of university libraries.

From checker at panix.com Wed Jul 27 20:39:28 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 27 Jul 2005 16:39:28 -0400 (EDT)
Subject: [Paleopsych] Guardian: (Sen) Beyond the call centre
Message-ID:

Beyond the call centre
http://books.guardian.co.uk/print/0,3858,5229887-99937,00.html
5.7.5

Nobel laureate Amartya Sen offers a brilliant corrective to the myths surrounding his homeland in The Argumentative Indian, says Soumya Bhattacharya

The Argumentative Indian: Writings on Indian History, Culture and Identity
by Amartya Sen
Allen Lane £25, pp409

This needs saying at the outset. In itself, it might seem like an unremarkable fact, but it actually is not: Amartya Sen is a citizen of India. While most of his countrymen who have been able to leave India for a long time try their best to become citizens of the country they might have gone to (Britain, America, Canada, Australia), Sen, a man whom Cambridge and Harvard are said to have fought over for the privilege of offering an appointment, resolutely retains his blue Indian passport after half a century of towering intellectual achievement across the world. Every year, the 1998 winner of the Nobel Prize for economics returns to Santiniketan, the tiny university town 100-odd miles from Calcutta. In Santiniketan, the former Master of Trinity College, Cambridge, can be seen on a bicycle, friendly and unassuming, chatting with the locals and working for a trust he has set up with the money from his Nobel Prize. One of the most influential public thinkers of our times is strongly rooted in the country in which he grew up; he is deeply engaged with its concerns.
There can, then, be few people better equipped than this Lamont University Professor at Harvard to write about India and the Indian identity, especially at a time when the stereotype of India as a land of exoticism and mysticism is being supplanted by the stereotype of India as the back office of the world. In this superb collection of essays, Sen smashes quite a few stereotypes and places the idea of India and Indianness in its rightful, deserved context. Central to his notion of India, as the title suggests, is the long tradition of argument and public debate, of intellectual pluralism and generosity that informs India's history.

One of the book's many triumphs is its tone. Sen does not indulge in triumphalism about his country's past; nor does he spare Western influences (like James Mill's History of British India) that have oversimplified and distorted the Indian reality. While talking about Indian democracy, for instance, he cautions: 'It is important to avoid the twin pitfalls of 1) taking democracy to be just a gift of the Western world that India simply accepted when it became independent, and 2) assuming that there is something unique in Indian history that makes the country singularly suited to democracy.' The truth is far more complex and lies somewhere between these two views.

Sen refutes the facile Western description of India as a 'mainly Hindu country' with the same rigorous scholarship with which he demolishes the isolationist, circumscribed view of Hindutva held dear by the Hindu right that ruled India between 1999 and 2004. Illuminated with examples from the teachings and lives of emperors such as Akbar and Ashoka, with illustrations from the epics, The Ramayana and The Mahabharata, and a staggering range of other references, he propounds a view of Hinduism as an inclusive philosophy rather than an exclusionist, divisive religion. This view of Hinduism is mature enough and magnanimous enough to accommodate dissenting views and 'even profound scepticism'.
This is a 'capacious view of a broad and generous Hinduism, which contrasts sharply with the narrow and bellicose versions that are currently on offer, led particularly by parts of the Hindutva movement'. Sen's tone is heartwarmingly celebratory in two essays, which talk about two figures who are exemplars of the heterodoxy that reflects the best of the Indian tradition. One essay is a spirited tribute to Rabindranath Tagore, India's only Nobel Laureate in literature. 'When poets die,' wrote Martin Amis in his defence of Philip Larkin, 'there is usually a rush to judgment: a revaluation, a retaliation - a reaction.' In the case of Tagore (poet, novelist, short story writer, essayist, playwright, educationist but chiefly known in the West as a poet), the revisionism happened very much during his lifetime. The early advocates of his work, among them Ezra Pound and WB Yeats, went from championing him to deriding him and Tagore's reputation lapsed into oblivion outside his own country before long. Sen dwells on the untranslatability of Tagore's work but argues that that was only part of the problem. Yeats and Pound and the others denounced Tagore because they could not, after a while, fit him into the exotic, spiritual Eastern pigeonhole in which they had put him. They wanted a mystic, a sage, and they missed altogether the point of the liberal, rationalist, humane thinker. The second essay concerns the cinema of Satyajit Ray, one of India's greatest film-makers. Ray, whose debut film, Pather Panchali (Song of the Little Road), celebrates its golden jubilee this year and endures still as an outstanding, poignant and relevant movie, was more than merely a film-maker. He was an extraordinarily gifted writer, artist and composer, seamlessly moving between the worlds of Western and Indian classical music. While his films have won many awards at festivals at Cannes, Venice and Berlin, in no way did he ever make his films pander to a preconceived notion of the Orient. 
His films are authentically Indian, rooted in the province of their origin (Bengal) and indisputably great. Sen argues that Ray achieved this synthesis by drawing on the heterodox tradition of India; he was willing to learn from other cultures and was able to blend that knowledge with what he had imbibed from his own. 'In our heterogeneity and in our openness lies our pride, not our disgrace. Satyajit Ray taught us this, and that lesson is profoundly important for India. And for Asia, and for the world.'

This is a book that needed to have been written. The perception of India in the West and, indeed, among Indians themselves has never been more amorphous than it is now. The Argumentative Indian will provide a new dimension and perspective to that perception. It would be no surprise if it were to become as defining and as influential a work as Edward Said's Orientalism.

From checker at panix.com Wed Jul 27 20:39:38 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 27 Jul 2005 16:39:38 -0400 (EDT)
Subject: [Paleopsych] World Policy Journal: Europe: Paradise Found?
Message-ID:

Europe: Paradise Found?
http://www.worldpolicy.org/journal/articles/wpj05-1/Gilbert.html
Volume XXI, No 4, Winter 2004/05
Mark Gilbert *

Last year, Europeans were from Venus, Americans were from Mars. Or so said the neoconservative commentator Robert Kagan, who published an influential pamphlet that depicted Europeans as lotus eaters in a Kantian paradise or, more accurately, an artificial Eden that only existed because eagle-eyed American sentries were manning the walls.^1 At a time when France and Germany were vocal in their condemnation of the war against Iraq, the book struck a chord. What right had Europeans to predict that Iraq would descend into postwar chaos? Had they no faith in America's can-do spirit? They should take off their philosophers' robes and put on combat fatigues if they want to criticize. This year, everything is fine in paradise.
Jeremy Rifkin, a prolific writer on contemporary social trends, argues in a recent book, The European Dream, that Europe's vision of the future is "quietly eclipsing" the American dream and that the European Union (EU) is the prototype of a new form of governance ideally suited for a world of complex interdependency in which states are no longer the principal actors.^2 Europe, not America, is the political model for the future. And besides, those guys really know how to live: "People still stroll in Europe." Kagan and Rifkin's books are at bottom an argument about history. For Kagan, Europe has opted to stay aloof from history, leaving to the United States the dirty work of dealing with the rogue states and power-hungry dictators of the world. Future textbooks will have pages and pages on the war on terrorism; the slim chapter on the European Union will gently sneer at the time and the effort Europe dedicated to negotiating the cod quota. For Rifkin, Europe is making history, may even be the end of history. The EU, he says, is the "first governing experiment in a world metamorphosing from geographic planes to planetary fields." The phrase is opaque, but read in context it is perfectly clear. The EU is the shape of things to come. According to Rifkin, Europe has become nothing other than a "giant freewheeling experimental laboratory for rethinking the human condition and reconfiguring human institutions in the global era." The United States, by contrast, with its gas-guzzling trucks and irrational attachment to sovereign rights, especially its own, appears in Rifkin's book as one huge damaging hangover from the Enlightenment. The United States is a nation whose public mind is still conditioned by the property-obsessed doctrines of Hobbes and Locke. 
The American Dream is a dream of domination, and just as Americans have raped their natural environment in pursuit of material gain, so the United States imposes itself upon the international environment, with a foreign policy based upon the massive projection of brute force. The American Dream, nowadays, is "largely caught up in the death instinct." Rifkin means by this "the frantic desire to live and prosper by killing and consuming everything around us." One has to add, sic.

The "European Dream," Rifkin concludes, is "a beacon of hope in a troubled world." The dream is that Europe will usher in a "second enlightenment," based upon the values of ecological sustainability, community, social cohesion, and universal human rights. This, at any rate, is what the EU is inching toward, though, of course, even Europeans do not always live up to the values they espouse. The rest of the world is watching events in Europe closely to see if a postnational and "postmodern" community based upon these values will emerge. If Europe does succeed, Rifkin argues, the European Union will displace the United States as the world's favored political and economic model and its influence will grow as a result. Its dream "will become an ideal for both West and East to aspire to."

An Introspective Giant

The weight of expectation placed upon the EU in Rifkin's book verges on the ridiculous. If the EU is a beacon for humanity, it is a distinctly smoky one, not a burning flame. Far from being a new Athens, the EU, which is now enlarged to 25 states, with Bulgaria, Croatia, Romania, and Turkey knocking at the door, is beset with organizational difficulties and with deep doubts over its scope and purpose.
Reading Rifkin's book one would never grasp either that the EU's new constitution, which was finally signed on October 29, 2004, was very much a lowest-common-denominator deal that ensured the supremacy of the member states in the policymaking process, or that the constitution is extremely unpopular in a number of member states traditionally favorable to European integration. France, for instance, may fail to ratify the constitution; if so, the constitution will be dead in the water. One would not grasp the current state of uncertainty over Europe's institutional future because Rifkin attributes little importance to the role played by the member states, especially the big member states, in the EU's decision-making process. Like many other commentators, Rifkin portrays the EU as an exemplar of "multi-level governance." What this means is that the EU is government by networks, not by the centralized top-down model common to traditional nation-states. EU policy is supposedly the result of a plethora of interactions between the European Commission and Parliament, civil society groups, business and corporate actors, government bodies at the regional and provincial as well as the national level, academic consultants, and so on. For an information-age guru like Rifkin, this "polycentric" approach to policymaking is simply more in tune with the changing dynamics of contemporary life, with the need to "cope with a continually changing present." Here, however, Rifkin simply ignores the fact that the strategic decisions affecting the EU's future are taken, usually by unanimity, by the member states meeting in the European Council. While it is true that the implementation of much EU policy is characterized by the methods Rifkin eulogizes, his picture of the EU as a hive of participatory policymaking, with political elites and civil society action groups constantly finding new policy syntheses, is more than a little idealized. 
The EU remains in most important ways an organization of nation-states whose primary form of governance is traditional diplomacy. If anything, 2004 saw a shift toward more assertive leadership by the principal member states. Convinced that the EU's decision-making structures are in fact hopelessly cumbersome, Britain, France, and Germany have begun to coordinate their positions during regular summit meetings of their leaders and chief ministers. But this move aroused outspoken opposition in Italy, Poland, and elsewhere, and it is unlikely to prove a recipe for long-term creative statesmanship. At least the values of the EU correspond to Rifkin's sketch? Well, maybe. Europeans do have more sense of community than Americans; they are also more methodical about sorting their trash. They do enjoy what Rifkin calls "deep play," a formulation that offends residual puritan sentiment less than the more beautiful Italian expression dolce far niente. And they often do walk or cycle to work. The EU, so far as it can, sensibly encourages all these good things. Certainly, it is true that the "Lisbon process" launched in 2000 with the purpose of making Europe the world's most competitive economy by 2010 also insisted that Europe's would-be growth spurt should be sustainable environmentally. Slash-and-burn economics are not on the EU agenda. Even so, it is misleading to pretend that such socially motivated values are the main issues that the EU deals with. The EU is less concerned with the promotion of worthy social goals than with sharing out hard cash. Over the next decade or so, much of Europe's leaders' time will be dominated by budget issues. Enlargement to the new democracies of Central and Eastern Europe promises to stress the EU's finances. Almost all the countries joining the EU possess substantial farming sectors and inferior levels of infrastructure and hence will have first claim on the EU's resources. 
Since almost nobody is talking of expanding the EU budget much beyond its current level of just over 1 percent of the European Union's gross domestic product, transferring resources to the east can only come at the expense of the current beneficiaries. Already, the idea is circulating that Britain might give up the automatic "rebate" on budget contributions won by Margaret Thatcher in the historic 1984 Fontainebleau summit. Spain, Portugal, Greece, and Italy are going to have to get used to paying for more of their own infrastructure. French farmers may even have to adjust to a more market-based farm economy. These developments have the potential to throw the EU back to the early 1980s, when the then European Community (EC) ground to a halt as a result of the conflict caused by out-of-control agriculture spending running into the Thatcher government's unwillingness to act as the EC's paymaster. EU officials are fond of talking of the European "family" of nations, but like most families, the EU is at its worst when it talks about money. Over the coming decade, the family's worst face is likely to be on regular display to the rest of the world. The budget issues will be all the more acute because the high-cost welfare states of the EU will face gigantic demographic challenges over the next 20 years. Unlike the United States, the European Union has a rapidly aging population. Its median age will rise to over 50 by the middle of this century, unless something remarkable happens to the birthrate. Europe, as the Financial Times columnist Martin Wolf has put it, is in danger of becoming "a vast old people's home." Immigration may do something to buck the trend, but in a continent where one French citizen in five voted for the right-wing extremist Jean-Marie Le Pen in the 2002 presidential elections and where far-right movements thrive in Austria, Belgium, Germany, and Italy, encouraging immigration, especially Muslim immigration, is fraught with political risk. 
In this regard, the reaction in Holland to the brutal killing by an Islamic fundamentalist of a controversial Dutch filmmaker, Theo van Gogh, in early November 2004 is instructive. Van Gogh's death provoked numerous acts of vandalism and attempted arson against mosques throughout the Netherlands. That such cultural tensions should be emerging even in Europe's most liberal and tolerant societies is an ominous sign for the continent's future harmony. Even Rifkin almost quails at the thought of the challenges posed to the EU by demography and right-wing populism. The "success or failure" of the European Dream, he says, depends upon "how the current generation of Europeans address the issues of fertility and immigration." The ongoing debate over the entry of Turkey into the EU, which is saturated with often unspoken fears about a new surge in migration, seems to imply that Europeans will embrace enhanced levels of multiculturalism only with great reluctance.^3 In all probability, Europe will address these massive problems by muddling through. Welfare states will be reformed gradually, but as the experiences of France, Germany, and Italy have shown over the last two years, the political costs of even relatively cosmetic reductions in cherished social programs are so dire that only leaders with a death wish will push for radical change. The share of Europe's wealth spent by the state will therefore remain high, as will, inevitably, taxes. Graying citizens, worried for their future and their children's, will save even more than they do already for their old age, thus depressing economic growth and limiting Europe's ability to act as an alternative engine for the world economy. So far as immigration goes, Europe doubtless will become browner, but the influx of immigrants will be a very gradual process, evoke a good deal of unpleasant rhetoric, and be subject to rigid controls. 
Overall, far from being a beacon for the rest of the world, Europe is likely to present an altogether different face to would-be imitators. Europe is likely to seem introspective, obsessed with the minutiae of its super-complex structures of supranational governance and with sordid questions of who gets what from the budget process. While still wealthy, it may easily seem to outsiders to be economically sluggish, socially conservative, and culturally forbidding. If this gloomy portrayal of the face the EU will project to the world is right, then the belief, widespread in Brussels and echoed by Rifkin, that the European Union will one day rival the United States and counterbalance American hegemony becomes dubious. This is because the notion of the EU as a superpower ultimately rests upon its capacity to project what the Harvard theorist Joseph Nye refers to as "soft power" rather than military clout. That is to say, Europe may be able to get what it wants in world politics by virtue of setting an example that other states wish to emulate rather than by being a first-rank military power. Certainly, Europe will not be a military power of significance in the foreseeable future. Although the EU has been active since the early 1990s in extending cooperation between its member states in the defense and foreign policy fields, and although the constitution includes the provision for an EU foreign minister, member states retain veto power over foreign and military policy decisions. Moreover, the armed forces at the EU's disposal are extremely limited. There is what Rifkin calls a "mind-numbing" difference in the American and the European capacity to wage war. EU forces are mostly less well-equipped and less expertly led. For the European Union, it is a very big deal that troops under the EU flag are about to substitute for NATO forces as peacekeepers in Bosnia. 
Europe's opportunity to emerge as a superpower, therefore, largely relies upon its appeal as a successful experiment in supranationalism. In fairness, the EU does have an inspiring story to tell. Since 1950, proud European nations have learned to put the institutionalized process of discovering common solutions ahead of the practice of imposing national ones. But it is not obvious, despite Rifkin's paean to the EU's achievements and his confidence in its future prospects, that the EU can project itself as a model for the rest of the world. It has far too many problems of its own.

The Interpretation of Dreams

Rifkin's book is, quite intentionally, a counterblast to neoconservative theorists like Kagan who are convinced that the future belongs to the United States and that everybody else will have to "adjust to the American hegemon." Rifkin is saying, in substance, "Don't be too sure." Perhaps the United States will find itself superseded by a less heroic and martial, but more humane and profound, set of values; a dream, he says in his peroration, that is not just worth dying for but living for. The United States will just have to adjust to losing its status as the land of the free.

The danger inherent in these exercises in pop historicism, however, is that they take on a life of their own. There is by now a widespread conviction--one that would have seemed absurd to all but a handful of campus radicals a dozen years ago--that post-Cold War Europe and the United States are inimical by nature or by calling rather than, simply and banally, in disagreement over important specific issues. The truth is that the relationship between Europe (old or new) and the United States is not ultimately a question of values.
We both adhere to the liberal norms of political and religious tolerance; to greater or lesser degrees we both favor a mixed economy in which the rights of private property are tempered by state-imposed social regulation; both societies are characterized, again to greater or lesser degrees, by hedonistic personal consumption. In recent years, Americans have stirred patriotic and religious values more deeply into the mix; Europeans have poured in a greater emphasis on the environment and sustainable living. But we remain two branches of a single civilization--ask al-Qaeda. None of this is to say that Europe and the United States will not have serious disagreements over the coming decade. If the picture I have presented in this essay is right, they almost certainly will. A sluggish, slow-to-reform Europe will be accused by the United States of holding back world growth and of being partly responsible for the perennial inability of America to balance its books. Europeans may easily be tempted into a kind of default isolationism in which they immerse themselves in the complexities of running a 30-nation, $12 trillion economy and neglect the outside world--exasperating the United States in the process. Some of the newer members of the EU will create transatlantic tensions by being more pro-American than France or Germany or Spain might like. Both Europe and the United States are bound to fall out over foreign policy questions and trade issues. The important thing, however, will be to ensure that we do not suffuse these disagreements with an air of historical inevitability by defining them as a fundamental clash of political identities. Rifkin says that he believes that the "growing divide" between Europe and America is more "visceral than pragmatic." If we believe this, the divide will indeed become wider.

Notes

1. Robert Kagan, Of Paradise and Power: America and Europe in the New World Order (New York: Knopf, 2003).
2.
Jeremy Rifkin, The European Dream: How Europe's Vision of the Future Is Quietly Eclipsing the American Dream (New York: Jeremy P. Tarcher/Penguin, 2004).
3. Fears over Turkey's entry into the EU are emerging at a high political level. See Valéry Giscard d'Estaing, "A Better European Bridge to Turkey," Financial Times (London), November 25, 2004.

* Mark Gilbert is associate professor of contemporary history at the University of Trento (Italy). In 2005, he will be a professorial lecturer at the Johns Hopkins School of Advanced International Studies, Bologna. From checker at panix.com Wed Jul 27 20:39:48 2005 From: checker at panix.com (Premise Checker) Date: Wed, 27 Jul 2005 16:39:48 -0400 (EDT) Subject: [Paleopsych] SW: On Geography and Skin Color Message-ID: On Geography and Skin Color http://scienceweek.com/2005/sw050729-4.htm The following points are made by Jared Diamond (Nature 2005 435:283): 1) The most obvious -- and most discussed -- aspect of human geographical variability is skin color. Most people would say that skin color becomes darker towards the Equator to give more protection against tropical sunlight. But that claimed correlation of skin color with latitude is riddled with exceptions, and that functional interpretation of the correlation is debated. Most scientists shy away from the whole subject because it so interests racists, and the motives of scientists studying it become suspect. 2) Jablonski and Chaplin [1-3] have brought order to this confused field, starting with quantitative measurements of skin color and sunlight. By convincingly identifying the strongest correlate of skin color, they open the door for anthropologists to explore other correlates and exceptions. 3) Skin color was formerly described qualitatively by matching it against colored tablets, but Jablonski and Chaplin tabulate numerical values, obtained by skin reflectance spectrophotometry.
And instead of using latitude as a proxy for sunlight, Jablonski and Chaplin tabulate ultraviolet radiation (UVR) itself at the Earth's surface. UVR does decrease with latitude, because at high latitudes the oblique angle at which sunlight falls on the atmosphere results in a longer atmospheric path, and hence more absorption and scattering of UVR. But the correlation of UVR with latitude is imperfect: UVR also increases with altitude owing to atmospheric thinning (for example, UVR is high on the Tibetan and Andean altiplanos); it also decreases with atmospheric water vapour in the form of rain, clouds or humidity (UVR is higher in the Atacama Desert, southwestern United States, and the Horn of Africa, than in adjacent, wetter areas to the west or east). 4) In this quantitative database, variation in UVR proves to be the strongest predictor of skin reflectance, explaining 77% (Northern Hemisphere) or 70% (Southern Hemisphere) of its variation. The causes of this correlation have been the subject of many theories, such as protection against skin cancer, protection against overproduction of vitamin D, and camouflage in tropical jungles. 5) Jablonski and Chaplin prefer a combination of two selective factors involving several costs and one benefit of UVR. The costs involve the destructive photolysis of many compounds, of which Jablonski and Chaplin attach particular importance to the B vitamin folate. Everybody requires folate, so everybody would have dark skins (to screen out UVR and reduce photolysis) if there were no other selective factors. However, UVR also provides a benefit: catalysing the synthesis of vitamin D. Hence skin color evolves as a compromise between skins light enough to permit UVR penetration for vitamin D synthesis, but dark enough to reduce folate photolysis.[4,5] References (abridged): 1. Jablonski, N. G. Annu. Rev. Anthropol. 33, 585-623 (2004) 2. Chaplin, G. Am. J. Phys. Anthropol. 125, 292-302 (2004) 3. Jablonski, N. G. & Chaplin, G. J. Hum. 
Evol. 39, 57-106 (2000) 4. Hawkes, K. et al. Am. J. Hum. Biol. 15, 380-400 (2003) 5. Diamond, J. Nature 410, 521 (2001) Nature http://www.nature.com/nature -------------------------------- Related Material: ANTHROPOLOGY: ON HUMANS AND RACE The following points are made by D.A. Hughes et al (Current Biology 2004 14:R367): 1) Systematists have not defined a "type specimen" for humans, in contrast to other species. Recent attempts to provide a definition for our species, so-called "anatomically modern humans", have suffered from the embarrassment that exceptions to such definitions inevitably arise -- so are these exceptional people then not "human"? Anyway, in comparison with our closest-living relatives, chimpanzees, and in light of the fossil record, the following trends have been discerned in the evolution of modern humans: increase in brain size; decrease in skeletal robusticity; decrease in size of dentition; a shift to bipedal locomotion; a longer period of childhood growth and dependency; increase in lifespan; and increase in reliance on culture and technology. 2) The traditional classification of humans as Homo sapiens, with our very own separate family (Hominidae) goes back to Carolus Linnaeus (1707-1778). Recently, the controversial suggestion has been made of lumping humans and chimpanzees together into at least the same family, if not the same genus, based on the fact that they are 98-99% identical at the nucleotide sequence level. DNA sequence similarity is not the only basis for classification, however: it has also been proposed that, in a classification based on cognitive/mental abilities, humans would merit their own separate kingdom, the Psychozoa (which does have a nice ring to it). 3) As for sub-categories, or "races", of humans, in his Systema Naturae of 1758 Linnaeus recognized four principal geographic varieties or subspecies of humans: Americanus, Europaeus, Asiaticus, and Afer (Africans). 
He defined two other categories: Monstrosus, mostly hairy men with tails and other fanciful creatures, but also including some existing groups such as Patagonians; and Ferus, or "wild boys", thought to be raised by animals, but actually retarded or mentally ill children that had been abandoned by their parents. In his scheme of 1795, Johann Blumenbach (1752-1840) added a fifth category, Malay, including Polynesians, Melanesians and Australians. 4) Blumenbach is also responsible for using the term "Caucasian" to refer in general to Europeans, which he chose on the basis of physical appearance. He thought Europeans had the greatest physical beauty of all humans -- not surprising, as he was of course European himself -- and amongst Europeans he thought those from around Mount Caucasus the most beautiful. Hence, he named the "most beautiful race" of people after their supposedly most beautiful variety -- a good reason to avoid using the term "Caucasian" to refer to people of generic European origin (another is to avoid confusion with the specific meaning of "Caucasian", namely people from the Caucasus). 5) The extent to which racial classifications of humans reflect any underlying biological reality is highly controversial; proponents of racial classification schemes have been unable to agree on the number of races (proposals range from 3 to more than 100), let alone how specific populations should be classified, which would seem to greatly undermine the utility of any such racial classification. Moreover, the apparent goal of investigating human biological diversity is to ask how such diversity is patterned and how it came to be the way that it is, rather than how to classify populations into discrete "races".(1-4) References: 1. Nature Encyclopedia of the Human Genome. (2003). Cooper, D. ed. (Nature Publishing Group), 2. Fowler, C.W. and Hobbs, L. (2003). Is humanity sustainable?. Proc. R. Soc. Lond. B. Biol. Sci. 270, 2579-2583 3. 
Encyclopedia of Human Evolution and Prehistory. (1988). Tattersall, I., Delson, E., and Van Couvering, J. eds. (Garland Publishing) 4. World Health Organization Website http://www.who.int Current Biology http://www.current-biology.com From checker at panix.com Wed Jul 27 20:49:50 2005 From: checker at panix.com (Premise Checker) Date: Wed, 27 Jul 2005 16:49:50 -0400 (EDT) Subject: [Paleopsych] SW: On Child Sexual Abuse Message-ID: Psychology: On Child Sexual Abuse http://scienceweek.com/2005/sw050722-4.htm The following points are made by J.J. Freyd et al (Science 2005 308:5721): 1) Child sexual abuse (CSA) involving sexual contact between an adult (usually male) and a child has been reported by 20% of women and 5 to 10% of men worldwide [1-3]. Surveys likely underestimate prevalence because of underreporting and memory failure [4,5]. Although official reports have declined somewhat in the United States over the past decade, close to 90% of sexual abuse cases are never reported to the authorities. 2) CSA is associated with serious mental and physical health problems, substance abuse, victimization, and criminality in adulthood. Mental health problems include posttraumatic stress disorder, depression, and suicide. CSA may interfere with attachment, emotional regulation, and major stress response systems. CSA has been used as a weapon of war and genocide and is associated with abduction and human trafficking [2]. 3) Much of the research on CSA has been plagued by nonrepresentative sampling, deficient controls, and limited statistical power. Moreover, CSA is associated with other forms of victimization, which complicates causal analysis of its role in adult functioning. 
However, associations in larger scale community and well-patient samples have been confirmed after controlling for family dysfunction and other risk factors, in longitudinal investigations that measure pre- and post-CSA functioning, and in twin studies that control for environmental and genetic factors. 4) Most CSA is committed by family members and individuals close to the child [1], which increases the likelihood of delayed disclosure, unsupportive reactions by caregivers and lack of intervention, and possible memory failure. These factors all undermine the credibility of abuse reports, yet there is evidence that when adults recall abuse, memory veracity is not correlated with memory persistence. Research on child witness reliability has focused on highly publicized allegations of abuse by preschool operators and has emphasized false allegations rather than false denials. Cognitive and neurological mechanisms that may underlie the forgetting of abuse have been identified. 5) Scientific research on CSA is distributed across numerous disciplines, which results in fragmented knowledge that is often infused with unstated value judgments. Consequently, policy-makers have difficulty using available scientific knowledge, and gaps in the knowledge base are not well articulated. References (abridged): 1. D. Finkelhor, Future Child. 4, 31 (1994) 2. World Health Organization (WHO), World Report on Violence and Health (WHO, Geneva, 2002) 3. R. M. Bolen, M. Scannapieco, Soc. Serv. Rev. 73, 281 (1999) 4. D. M. Fergusson, L. J. Horwood, L. J. Woodward, Psychol. Med. 30, 529 (2000) 5. J. Hardt, J. Child Psychol. 
Psychiatry 45, 260 (2004) Science http://www.sciencemag.org From checker at panix.com Wed Jul 27 20:50:04 2005 From: checker at panix.com (Premise Checker) Date: Wed, 27 Jul 2005 16:50:04 -0400 (EDT) Subject: [Paleopsych] NYT: Outcomes: In Gauging Twins' Health, Follow the Money Message-ID: Outcomes: In Gauging Twins' Health, Follow the Money http://www.nytimes.com/2005/07/26/health/26outc.html?pagewanted=print By NICHOLAS BAKALAR Female identical twins, even when raised together, differ significantly in health status depending on the economic class they attain as adults, according to research published yesterday in Public Library of Science Medicine. In a survey of 308 twin pairs, working-class women had higher blood pressure and higher cholesterol than their professional twins, and their estimates of their own health were consistently lower than those of their more affluent sisters. Their education, however, had little effect. The results applied to both identical and fraternal twins, although there was greater variation in health among fraternal twins. The findings demonstrate the negative health impact of low socioeconomic status in adulthood, said Nancy Krieger, the lead author of the study and a professor of society, human development and health at Harvard. "However much genetic endowment may matter," Dr. Krieger said, these results show "that lifetime experiences and exposures tied to social class, and not just genes, determine adult health status." The authors acknowledge that their research involved only a small number of pairs who were working class as adults, and that their research recorded no data on birth order, birth weight or details of occupational class and income over time. All of those factors, they said, could affect the interpretation of their results. 
Still, they assert that their findings add further proof to a large body of research showing that chronic exposure to the stresses of low economic status can harm health, independent of genetic susceptibilities to illness. From checker at panix.com Wed Jul 27 20:50:11 2005 From: checker at panix.com (Premise Checker) Date: Wed, 27 Jul 2005 16:50:11 -0400 (EDT) Subject: [Paleopsych] NYT: Police Debate if London Plotters Were Suicide Bombers, or Dupes Message-ID: Police Debate if London Plotters Were Suicide Bombers, or Dupes http://www.nytimes.com/2005/07/27/international/europe/27suicide.html [This is very interesting and is on the front page of today's Times. I checked the Washington Post, the Washington Times, USA Today, the Wall Street Journal, and the Christian Science Monitor. None had an equivalent article. My hypothesis is that this article below is original reporting. I wonder if the ideas will be followed up.] By [3]ELAINE SCIOLINO and DON VAN NATTA Jr. LONDON, July 26 - Within hours of the July 7 attacks here, many British police and intelligence officials assumed that the four bombers had intended to die with their bombs. But in recent days, some police officials are increasingly considering the possibility that the men did not plan to commit suicide and were duped into dying. Investigators raising doubts about the suicide assumption have cited evidence to support this theory. Each of the four men who died in the July 7 attacks purchased round-trip railway tickets from Luton to London. Germaine Lindsay's rented car left in Luton had a seven-day parking sticker on the dashboard. A large quantity of explosives was stored in the trunk of that car, perhaps for another attack. Another bomber had just spent a large sum to repair his car. The men carried driver's licenses and other ID cards with them to their deaths, unusual for suicide bombers. In addition, none left behind a note, videotape or Internet trail as suicide bombers have done in the past.
And the bombers' families were baffled by what seemed to be their decisions to kill themselves. While some of these clues could be seen as the work of men intent on covering their trail, some investigators increasingly believe that the men may have been conned into carrying the bombs onto the trains and leaving them, thinking they were going to explode minutes later. There remains some evidence suggesting that these were suicide bombers, beyond the fact that all died in the blasts. Their bodies, all of which were recovered, were positioned in a way that led investigators to make a preliminary determination that these may have been suicide attacks. One of the remaining mysteries that neither camp can explain away is that the attacker on the bus died 57 minutes after the blasts on the trains; witnesses saw him putting his hand in the backpack. The bus bomber could support either theory. To further complicate the matter, there are conflicting witness accounts of the behavior of the July 21 attackers. Some fled after the bombs failed to explode; at least one, on the bus, was said to have left the scene before the failed detonation. The suicide question has major implications not only for the investigation, but also for the assessment of the terrorist threat that London faces. If the attacks were a suicide mission, they would be the first suicide bombings on European soil, and signal a dangerous new threat. Suicide could indicate a higher level of commitment and point to the existence within Britain of extremists willing to die for a cause. If the men were not suicide bombers, some of the most basic assumptions of the investigation would change. On one level, the idea makes the plot less ominous. It is much easier to recruit "mules" who will carry and deposit explosives than people who are prepared to die. Several senior officials say a lively debate is under way within the investigation and wider intelligence circles. 
Some say the initial hypothesis that the July 7 attacks were carried out by determined fanatics willing to die in the name of a radical interpretation of Islam may have been too simplistic. "What appeared to be straightforward linear thinking last week doesn't appear to be so today," said one foreign corporate head and former senior defense official with access to police information. "There was the strong feeling after Attack One that these kids must have really been brainwashed to become suicide bombers. Then the botched Attack Two happens, and the question now is whether these were dedicated guys ready to die or stupid guys run by a smart group of people pulling the strings." The notion makes it more likely that there is an unknown mastermind who might have organized both attacks, and could still be organizing others. The British police have been reluctant to publicly declare the July 7 bombings a suicide mission. Britain's top police officers - Sir Ian Blair, the Metropolitan Police commissioner, and Peter Clarke, the head of Scotland Yard's antiterrorist branch - have steadfastly refused to call the men "suicide bombers" in public. "Technically they're not suicide bombers," said one police officer familiar with the investigation. "Scotland Yard has not said they are. Even if we may think they probably were suicide bombers, the police have not said this outright." A senior official of a European intelligence agency said: "The British from the beginning have had some doubts about the suicide hypothesis and cannot say exactly whether it is true. Our own analysis is that we can say that it is not absolutely necessary that this was a suicide mission." The botched attacks of July 21 have made the debate more urgent. The July 21 team's lack of sophistication made some investigators reassess the July 7 bombing team's organization skills. 
Several investigators said the July 7 bombers, ranging in age from 18 to 30, might not have been sophisticated enough to plan a synchronized attack, with three bombs exploding in the London Underground within 45 seconds. "I just have a hard time fathoming kids that young being that sophisticated," one senior intelligence official said. Another theory, several intelligence and counterterrorism officials said, is that the men knew there were timers on the bombs, and were instructed to leave the explosives on the trains at a designated time, perhaps 9 a.m. "It is possible that they were told the bombs would blow up at 9:10 a.m. or 9:15 a.m., and they were to stay with them until 9 a.m.," another official said. The bombs went off at 8:50 a.m. American investigators are convinced that several of the Sept. 11 hijackers, the so-called muscle who were recruited near the end of the operation, might not have been told that the four hijacked airplanes were intended to be used for suicide missions. In a news conference the day after the first attacks, Sir Ian, the commissioner of the Metropolitan Police, said: "There is nothing to suggest that there was a suicide bomber involved in this process. On the other hand, nothing can be ruled out." Essentially, that view has not changed since then. In his monthly news conference on Tuesday, Prime Minister Tony Blair referred to the suicide issue, but as a sweeping hypothetical premise. "There is no justification for suicide bombing whether in Palestine, in Iraq, in London, in Egypt, in Turkey, anywhere," he said at one point. At another point, he said, "Suicide bombing is wrong, whether it is in Israel, or London or New York." A spokesperson at Downing Street said Mr. Blair's remarks were intended to be general comments about suicide bombings, not a confirmation that the police now believe that the July 7 attacks were indeed suicide bombings. "I think he was speaking generally," the spokesperson said. 
"He has always said he would leave operational questions to the police to answer. I think this was a situation where he was asked a question and he was speaking quite generally about the subject." The view that the four bombers might have been duped into carrying out their suicide missions is one that is shared by family members of some of the men, who have said in interviews that they refuse to believe that they signed on to carry out a suicide mission. Shehzad Tanweer's uncle, Bashir Ahmed, 65, said his family had no idea that the 22-year-old Leeds man who loved cricket and soccer was planning a suicide attack. "It must have been forces behind him," Mr. Ahmed said. The family of Germaine Lindsay, 19, also said they were stunned. His wife, Samantha Lewthwaite, 22, said her late husband was "a loving husband and father" who had shown "absolutely no sign of doing this atrocious crime." Ms. Lewthwaite added, "We are still in shock about the news we have been given and are trying to understand why anyone, never mind Germaine, would do such a thing." Mark Baillie, the terror and defense expert at the Center for Defense and International Security Studies, said the debate about whether the July 7 bombers intended to die "is something that everybody is beginning to talk about." "There are several anomalies that lead you to think that they were not suicide bombers," Mr. Baillie said. "It would have been very interesting if they were tricked." Jonathan Allen and Hélène Fouquet contributed reporting for this article. From anonymous_animus at yahoo.com Thu Jul 28 00:55:31 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Wed, 27 Jul 2005 17:55:31 -0700 (PDT) Subject: [Paleopsych] changing teen behavior In-Reply-To: <200507271800.j6RI0ZR28299@tick.javien.com> Message-ID: <20050728005531.51458.qmail@web30804.mail.mud.yahoo.com> >>Does anybody really know how to change a teenager's behavior?<< --Only those who understand and relate well to teenagers.
Adults in our culture tend to be heavily indoctrinated with "adult-speak" which makes it difficult for them to authentically relate to kids. Anyone who is still a kid at heart will find it easier, especially if he or she went through similar experiences as a kid. Grandparents tend to be good with kids if they're proactive and involved. Obviously looking at a kid as a problem and trying to manipulate him into doing what you believe he should be doing isn't going to work. Nobody likes to be manipulated, at any age. Michael From waluk at earthlink.net Thu Jul 28 03:08:16 2005 From: waluk at earthlink.net (Gerry) Date: Wed, 27 Jul 2005 20:08:16 -0700 Subject: [Paleopsych] changing teen behavior In-Reply-To: <20050728005531.51458.qmail@web30804.mail.mud.yahoo.com> References: <20050728005531.51458.qmail@web30804.mail.mud.yahoo.com> Message-ID: <42E84C20.6090906@earthlink.net> >>Obviously looking at a kid as a problem and trying to manipulate him into doing what you believe he should be doing isn't going to work. Nobody likes to be manipulated, at any age.>> True. For those who have a bit of extra funds (or supportive grandparents) my answer is live-away private schooling. There is nothing like a live-in group of peers following procedure established by those in charge. For those who refuse to comply....simple....they are given the good ole fashion boot. Gerry Reinhart-Waller From checker at panix.com Thu Jul 28 21:44:20 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:44:20 -0400 (EDT) Subject: [Paleopsych] BBC: Ethiopian Man With 77 Children Lectures on Birth Control Message-ID: Ethiopian Man With 77 Children Lectures on Birth Control Polygamy no fun, admits Ethiopian By Mohammed Adow BBC, Ethiopia http://news.bbc.co.uk/go/pr/fr/-/2/hi/africa/4720457.stm [Thanks to Laird for this.]
An Ethiopian man with 11 wives and 77 children is urging people not to follow his example and is giving advice on family planning and contraception. After seeing his fortune disappear under the competing demands of his enormous family, Ayattu Nure, 56, even urges people not to get married. "I want my children to be farmers but I have no land, I want them to go to school but I have no money," he says. But his eldest son has not heeded Mr Ayattu's advice and he has three wives.

Share wealth

Seven of Mr Ayattu's wives live in huts around his compound, which are in urgent need of renovation. Another four live in huts on the other side of the valley in Giwe Abossa village, 300km from the capital, Addis Ababa in Arsi region. He says he cannot remember all his children's names but tries to work out who they are from their mothers and which huts they live in. Mr Ayattu says he used to be rich and wanted to share his wealth around, which is why he took so many wives. But now he struggles to feed them all. "I feel like killing myself when I see my hungry children whom I cannot help," Mr Ayattu says. His wives have given birth to more than 100 children but 23 have died.

School photos

However, he blames Ethiopia's government for not doing more to help him look after all his children. "I know I have done wrong by marrying many wives and begetting many children but I think I deserve help from the government." But his biggest complaint at the moment is with the authorities of the local school which 40 of his children now attend. They want photographs for each of his children's files, which will further deplete his meagre resources. He says that he tries to share his time evenly between his wives and children, adding that although quarrels and squabbles are common, they try to solve their problems amicably. "People see me as a funny man, but there is no fun in my condition. I am a desperate man struggling to survive," he says.
Although Mr Ayattu's eldest son, Dagne Ayattu, does not have a job, at the age of 33, he has seven children and is about to marry his fourth wife. But he says he will not have as many children or wives as his father. From checker at panix.com Thu Jul 28 21:44:28 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:44:28 -0400 (EDT) Subject: [Paleopsych] Steve Sailer: Jared Diamond: the New King of All Media Message-ID: Steve Sailer: Jared Diamond: the New King of All Media http://vdare.com/sailer/050724_diamond.htm [14]Steve Sailer Archive July 24, 2005 Jared Diamond: the New [17]King of All Media By [18]Steve Sailer [19]Stephen Jay Gould's death in 2002 opened up the position of our most celebrated scientist-seer, a post currently filled in Britain by [20]Richard Dawkins. The job requirements seem to include starting out as a specialist in one of the life sciences and then developing a taste for generalizing about humanity. Among the contenders: Gould's old rival, Edward O. Wilson (author of [21]Sociobiology and [22]Consilience) and the younger [23]Steven Pinker ([24]How the Mind Works and [25]The Blank Slate). In 2005, UCLA geographer and physiologist [26]Jared Diamond has made his bid, becoming omnipresent in the media with:

* An environmentalist bestseller, [27]Collapse: How Societies Choose to Fail or Succeed.
* An [28]exhibit at the LA Natural History Museum based on Collapse.
* A three-part [29]documentary currently showing on PBS based on his 1997 Pulitzer Prize winner [30]Guns, Germs, and Steel: The Fates of Human Societies, which stars Diamond and his [31]Captain Ahab beard. (Note that Diamond is not shy about giving his books ambitious subtitles!)

Before Diamond began writing for a popular audience, around his 50th birthday in 1987, he was a professor at UCLA's medical school and a leading birdwatcher in New Guinea.
His early magazine articles in Discover and Natural History were collected in his initial and, to my mind, best book, [32]The Third Chimpanzee: The Evolution and Future of the Human Animal. His subsequent big books, [33]Guns, Germs, and Steel and Collapse, were both sketched out in tour de force chapters in The Third Chimpanzee. The power-to-weight ratio of Diamond's writing didn't improve when he expanded them into doorstop books. As a prose stylist, Diamond, while perfectly adequate, isn't quite in the same class as Gould, Dawkins, Wilson, or Pinker, and his long books can be a tough slog. Third Chimpanzee was also distinguished by a fair degree of courage. Diamond tackled politically incorrect questions like: Why did most of the big mammals that lived in North America at the time the Indians arrived--such as wooly mammoths, camels, and horses--go extinct so quickly after the first Indians arrived across the Bering Strait? Diamond's answer: the Indians ate them. In fact, back in 1986 Diamond published a study in Nature that is so unfit for polite society that it would probably get him lynched by his current admirers if they ever heard of it: "[34]Ethnic Differences: Variations in Human Testis Size." Personally, I don't have a lot of first-hand experience, so I couldn't give you my opinion on the validity of Diamond's findings on racial differences in testicle size. But Diamond seemed pretty fascinated by the subject. Unfortunately, the market for the uncomfortable truths is a lot smaller than the market for what people want to hear. So after his initial book, Diamond remained a cult figure. But Diamond has certainly solved that problem. He turned to the topic of race, offering impressive-sounding rationalizations for what intellectuals wanted to believe anyway. Diamond helped launch the [35]Race Does Not Exist fad with his November, 1994 Discover article "[36]Race Without Color." 
In this, he suggested that we could define races on any physical characteristic we chose. Norwegians and Nigerian Fulanis could belong to the Lactose Tolerant race, while Japanese and Nigerian [37]Ibos would belong to the Lactose Intolerant race. The reason that assigning Fulanis and Ibos to separate races is obviously ridiculous is that the most useful definition of race is not built on any particular trait. Instead, it's built on ancestry. We all intuitively know that Fulanis and Ibos are more racially similar to each other because they share more recent common ancestors with each other than with [38]Norwegians or [39]Japanese. Race starts with boy meets girl, followed by baby. That line of thought suggests that the most useful definition of a racial group is "[40]a partly inbred extended family," as I pointed out a few years later in [41]response to Diamond. But, when it comes to race, obfuscation pays a lot better than illumination. Diamond turned himself into Jared Diamond, Superstar! with his 1997 bestseller [42]Guns, Germs, and Steel. This book purported to Disprove Racism, which he defined tendentiously as merely believing that genetic differences in human capabilities along racial lines exist. It certainly made him a fixture as a [43]speaker at the tonier sort of conference. (For instance, I saw Diamond at legendary financier [44]Michael Milken's annual confab.) Diamond's goal in his book was to explain why Eurasians conquered Africans, Australians, and Americans instead of the other way around. Conventional social scientists shy away from such a fundamental question out of fear of what they might find. And Diamond duly [45]proclaimed genetic explanations [46]"racist" and "loathsome." He set out to reaffirm the equality of humanity by showing the radical inequality of the continents. To him, the three most important engines of history were location, location, and location.
Diamond [47]asked: "Why didn't rhino-mounted Bantu warriors swarm north to decimate horse-mounted Romans and create an empire that spanned Africa and Europe?" His answer: rhinos and other African animals are impossible to domesticate, unlike Eurasian beasts such as horses and cattle. Guns, Germs, and Steel contained a lot of useful information and reasonable speculation. But a little thought raises serious questions:

* Not all sub-Saharan Africans lack domestic animals. For instance, the [48]Fulanis are mostly [49]lactose tolerant precisely because they herd cattle on a massive scale and thus evolved the ability to drink cow's milk as adults.
* It's true that Africans never [50]domesticated the ostrich, but a [51]Mr. Hardy pulled off the trick in the late 19th century, when South African farmers raised almost a million of these 300-pound birds to supply the fancy hat industry with feathers.
* Most strikingly, Diamond failed to recall that elephant-mounted African warriors did swarm north to decimate horse-mounted Romans and almost create an empire that spanned Africa and Europe in perhaps the most famous feat of ancient warfare: Hannibal crossing the Alps. (Although a biopic with [52]Denzel Washington as Hannibal has long been under development in Hollywood, the North African Carthaginians were actually the Semitic descendants of the Levantine Phoenicians.)

But those are quibbles compared to the central contradiction in Guns, Germs, and Steel: Diamond makes environmental differences between the continents seem so compelling that it's hard to believe that humans would not become somewhat genetically adapted to their homelands through natural selection. Most of his readers must have assumed that natural selection can't work fast enough to diversify humans. But Diamond knows that's not true, as his lactose tolerance illustration demonstrated.
This mutation didn't begin to spread until people started milking animals sometime in the last 13,000 years. However, by now [53]98 percent of Swedes are lactose tolerant as adults versus two percent of Thais. This example of human biodiversity is hardly trivial: evolving the ability to digest milk has had a sizable [54]economic and cultural impact on, say, the Swiss. Self-defeatingly, Diamond began Guns, Germs, and Steel by making a eugenic argument that New Guineans are smarter than whites because "natural selection promoting genes for intelligence has probably been far more ruthless in New Guinea than in more densely populated, politically complex societies..." Of course, the reality is actually that while New Guineans are, on average, no doubt better at Stone Age life than you or I would be, people whose ancestors have survived for many generations in "densely populated, politically complex societies" tend to be better at functioning in the modern world. As far as I can tell, Diamond only lectures, never debates. I've never heard of him ever allowing himself to be dragged into a public discussion with a well-informed opponent. I talked to Diamond once after he gave a speech. We were chatting nicely until I asked him a tough question along the lines outlined above: Wouldn't different [55]agricultural environments select for different hereditary traits in different locales? I mentioned how James Q. Wilson's [56]The Marriage Problem has a couple of chapters on how tropical agriculture in [57]West Africa affects family structures. Since women can raise enough food to feed their kids, men don't invest as much in their individual children. So wouldn't the kind of man with the most surviving children be different in a tropical agricultural environment, where he doesn't need to work too much to support them, than in a temperate agricultural environment, where he does? Now, Diamond has spent a lot of time birdwatching in New Guinea, which is similar to Africa. 
So he knows all about what tropical agriculture selects for. But he had no intention of touching that tar-baby with a ten-foot pole. To get away from me and my question, he grabbed his papers and literally dog-trotted at about 5 mph out of the auditorium! Diamond can run, but he won't be able to hide from the facts forever. I hear there are now several scientific papers in the publication pipeline about racial differences in genes that affect cognition and personality, each comparable in importance to the recent blockbuster paper on the genetic roots of [58]Ashkenazi Jewish IQ. Diamond's latest bestseller, [59]Collapse, is about "[60]ecocide," or unintentional ecological suicide, due to environmental disasters such as [61]deforestation. Ecological concerns are pooh-poohed by many free-market ideologues, but [62]environmental problems, which economists call "externalities," are indeed inherent in any economic system. And Diamond supplies a lot of useful, if overstated, information. But "ecocide," while significant, is less important than Diamond implies. That's why he spends so much time on trivial edge-of-the-world doomed cultures, like the Vikings in Greenland and the [63]Polynesians on Easter Island, rather than on more important collapses such as the decline and fall of the [64]Roman Empire. Generally, homicide, not suicide, is the main cause of collapse. Societies get [65]invaded and [66]overwhelmed. Diamond cites the disappearance of the Maya--but what about the Aztecs and the Incas, still going strong when the Spanish arrived? He points to the [67]Anasazi Indians--but there were also the Cherokee, the Sioux, and countless others. He notes the Easter Islanders--but I counter with the Maoris, the Tasmanians, the Australian Aborigines, the Chatham Islanders (exterminated by the Maori), and so forth. He cites the Vikings in [68]Greenland--but how about the Saxons in Britain and the Arabs in Sicily, both conquered by descendants of the Vikings?
Still, Collapse can be valuable, especially if you look for the parts where Diamond shows more courage than is normal for him these days. A close reading demonstrates that Diamond is quite unenthusiastic about mass immigration. For instance, in his chapter about the ecological fragility of Australia, he relays this optimistic hope for better policy in the future: "Contrary to their government and business leaders, 70 percent of Australians say they want less rather than more immigration." Diamond also points out that the quality of immigrants matters. In an interesting chapter comparing the two countries that share the island of Hispaniola, the mediocre but livable [69]Dominican Republic and dreadful [70]Haiti, he notes that one reason the Dominican Republic is now both more prosperous and less deforested and eroded than tragic Haiti is the difference in their people: "... the Dominican Republic, with its Spanish-speaking population of predominantly European ancestry, was both more receptive and more attractive to European immigrants and investors than was Haiti with its Creole-speaking population composed overwhelmingly of black former slaves." Ironically, when I left the "Collapse" exhibit, with its warnings about overpopulation, at Los Angeles's Natural History museum, I turned out of the parking lot onto [71]Martin Luther King Boulevard, where the [72]billboards were in Spanish. In LA, the [73]African Americans have been pushed off even MLK Blvd. by Latin American immigrants. Diamond writes: "I have seen how Southern California has changed over the last 39 years, mostly in ways that make it less appealing... The complaints voiced by virtually everybody in Los Angeles are those directly related to our growing and already high population... While there are optimists who explain in the abstract why increased population will be good and how the world can accommodate it, I have never met an Angeleno ... 
who personally expressed a desire for increased population in the area where he or she personally lived... California's population growth is accelerating, due almost entirely to immigration and to the large average family sizes of the immigrants after their arrival." Unfortunately, Diamond's bravery then breaks down again. Rather than call for doing something about immigration, such as enforcement of the laws against illegal immigration, he merely laments, "The border between California and Mexico is long and impossible to patrol effectively ..." No, it's not. Israel, with two percent of America's population, is successfully fencing off its West Bank border, which is ten percent as long. In another important section, Diamond illustrates how ethnic diversity makes environmental cooperation more difficult. He praises the Dutch as the most cooperative nation on earth and attributes their awareness of and willingness to tackle problems to their shared memory of the 1953 flood that drowned 2,000 Netherlanders living below sea level. (Unfortunately, he doesn't mention whether Holland's rapidly growing immigrant Muslim population remembers when the dikes failed 52 years ago.) Diamond notes that there are three possible solutions to what Garrett Hardin called "[74]the tragedy of the commons," or the tendency for individuals to over-consume resources and under-invest in responsibilities held in common, leading to ecological collapse.

* Government diktat.
* Privatization and property rights -- but that's impractical with some resources, such as fish.
* "The remaining solution to the tragedy of the commons is for the consumers to recognize their common interests and to design, obey, and enforce prudent harvesting quotas themselves.
That is likely to happen only if a whole series of conditions is met: the consumers form a homogenous group; they have learned to trust and communicate with each other; they expect to share a common future and to pass on the resource to their heirs; they are capable of and permitted to organize and police themselves; and the boundaries of the resource and of its pool of consumers are well defined." (My emphasis) Wow! (A classic supporting case that Diamond doesn't bring up: American shrimp fishermen in Texas were universally denounced as racists in the late 1970s when they resisted the government's efforts to encourage Vietnamese refugees to become shrimpers in their waters. French director Louis Malle made a movie, Alamo Bay, [75]denouncing ugly Americans fighting [76]hardworking immigrants. What got lost in all the tsk-tsking is that fishing communities always resist newcomers, especially hardworking ones, because of the sizable chance that outsiders who don't know the local rules, or don't care about them, will ruin the ecological balance and wipe out the stocks of fish--all things for which Vietnamese fishermen are now notorious.) The evidence Diamond assembles indicates, although of course he never dares to state it bluntly, that the fundamental requirement for dealing effectively with environmental danger is: start with a population that's limited in number, cohesive, educated, and affluent. Needless to say, mass immigration from the Third World works against all those characteristics. My conclusion: keep in mind while reading Diamond's bestsellers that, after a promising start, he mostly sold out to political correctness. Then you can salvage something from his books. It's not edifying behavior from a [77]tenured professor--but in the current climate, we have to take what we can get. [Steve Sailer [[78]email him] is founder of the Human Biodiversity Institute and [79]movie critic for [80]The American Conservative.
His website [81]www.iSteve.com features site-exclusive commentaries.]

References

14. http://vdare.com/sailer/index.htm
17. http://www.fortunecity.com/bennyhills/palin/250/howlinks.htm
18. http://vdare.com/sailer/index.htm
19. http://www.isteve.com/2002_Stephen_Jay_Gould_RIP.htm
20. http://www.vdare.com/sailer/dawkins.htm
21. http://www.isteve.com/Sociobiology.htm
22. http://www.amazon.com/exec/obidos/tg/detail/-/0679450777/vdare
23. http://www.isteve.com/2002_QA_Steven_Pinker.htm
24. http://www.amazon.com/exec/obidos/tg/detail/-/0393318486/vdare
25. http://www.vdare.com/sailer/pinker_progress.htm
26. http://149.142.237.180/faculty/diamond.htm
27. http://www.amazon.com/exec/obidos/ASIN/0670033375/vdare
28. http://www.nhm.org/exhibitions/collapse/index.html
29. http://www.pbs.org/gunsgermssteel/index.html
30. http://www.amazon.com/exec/obidos/tg/detail/-/0393317552/vdare
31. http://www.edge.org/3rd_culture/diamond/diamond_p1.html
32. http://www.amazon.com/exec/obidos/tg/detail/-/0060984031/vdare
33. http://www.isteve.com/diamond.htm
34. http://www.gnxp.com/MT2/archives/003206.html
35. http://www.vdare.com/sailer/cavalli-sforza_ii.htm
36. http://www.antiracistaction.ca/race3.html
37. http://www.chiro.cc/health_database/milk.shtml
38. http://www.vdare.com/letters/tl_050404.htm
39. http://www.vdare.com/sailer/japanese_robots.htm
40. http://www.vdare.com/sailer/presentation.htm
41. http://www.isteve.com/makingsense.htm
42. http://www.isteve.com/diamond.htm
43. http://www.thelavinagency.com/college/jareddiamond.html
44. http://www.mikemilken.com/photos.taf?photo=148
45. http://www.nybooks.com/articles/1132
46. http://www.lrainc.com/swtaboo/stalkers/jpr_ggs.html
47. http://www.edge.org/3rd_culture/diamond_pulitzer/diamond_index.html
48. http://www.uiowa.edu/~africart/toc/people/Fulani.html
49. http://www.scienceinafrica.co.za/2002/june/lactose.htm
50. http://www.ostrich.org.uk/
51. http://www.ostrich.org.uk/farming.html
52. http://www.isteve.com/2002_Denzel_Washington_as_Afrocentrist_Hannibal.htm
53. http://wsrv.clas.virginia.edu/~rjh9u/lacdata.html
54. http://www.info-galaxy.com/Chocolate/History/Swiss_Pioneers/swiss_pioneers.html
55. http://www.vdare.com/sailer/050515_redneck.htm
56. http://www.amazon.com/exec/obidos/tg/detail/-/006093526X/vdare
57. http://www.vdare.com/sailer/randall_kennedy.htm
58. http://www.vdare.com/sailer/050605_iq.htm
59. http://www.amazon.com/exec/obidos/tg/detail/-/0670033375/vdare
60. http://www.npg.org/footnote/collapse.html
61. http://www.vdare.com/sailer/conservation_program.htm
62. http://www.vdare.com/pb/burke_greens.htm
63. http://www.greatchange.org/footnotes-overshoot-easter_island.html
64. http://www.vdare.com/williamson/city_of_pilgrims.htm
65. http://www.vdare.com/pb/malkin.htm
66. http://www.vdare.com/francis/witch_hunters.htm
67. http://raysweb.net/canyonlands/pages/drought.html
68. http://vdare.com/misc/yeagley_columbus_day.htm
69. http://www.vdare.com/rubenstein/winner.htm
70. http://vdare.com/misc/rushton_african_iq.htm
71. http://www.vdare.com/sailer/mlk_street.htm
72. http://blog.vdare.com/archives/2005/05/02/that-la-mx-billboard-the-rest-of-the-story/
73. http://www.vdare.com/guzzardi/look_the_other_way.htm
74. http://dieoff.org/page95.htm
75. http://movies2.nytimes.com/gst/movies/movie.html?v_id=1352
76. http://www.oldstatehouse.com/educational_programs/classroom/arkansas_news/detail.asp?id=113&issue_id=3&page=7
77. http://www.vdare.com/pb/purpose_of_tenure.htm
78. mailto:steveslr at aol.com
79. http://groups.yahoo.com/group/iSteve-movies/
80. http://www.amconmag.com/
81. http://www.isteve.com/

From checker at panix.com Thu Jul 28 21:44:36 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:44:36 -0400 (EDT) Subject: [Paleopsych] Legal Affairs: Douglas R. Burgess Jr.: The Dread Pirate Bin Laden Message-ID: Legal Affairs: Douglas R.
Burgess Jr.: The Dread Pirate Bin Laden http://www.legalaffairs.org/issues/July-August-2005/feature_burgess_julaug05.msp July|August 2005 --------------- First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.7.27 http://chronicle.com/prm/daily/2005/07/2005072701j.htm A glance at the July/August issue of Legal Affairs: Bin Laden, the new Blackbeard? In 1856 international law recognized two entities: people and states. People followed the rule of their governments, while nations bowed to international pacts. When the Declaration of Paris was signed that year, a category was created for a third group: pirates. Almost 150 years later, says Douglas R. Burgess Jr., an author and expert on international law, the parameters of that category can be used to fight an enemy of today: terrorists. In 2005 no international law defines terrorism, says Mr. Burgess. Following older standards can work, he says, because terrorists, like pirates, lack the protection of law afforded to citizens. They also lack the sovereignty of legitimate nations. That unique status means pirates can be captured wherever they are found, by whoever finds them. "The ongoing war against pirates," he argues, "is the only known example of state vs. nonstate conflict until the advent of the war on terror." But how can terrorism be compared to piracy? The corollaries, says Mr. Burgess, can be "profound and disturbing." Both declared war against civilization. Both have prepared their attacks in hiding, whether in a remote cove or in a secret cell. Both have aimed to bring attention to their causes. They share the goals of destruction, homicide, and frustration of commerce. Most important, he says, "both are properly considered enemies of the rest of the human race." A more effective war against terrorism could be fought, says Mr. Burgess, if these actions were taken: Terrorists would be enemies of all states if an international definition for terrorism existed.
Universal jurisdiction would also keep nations from harboring terrorists as freedom fighters by distinguishing between legitimate insurgents and outright terrorists. Above all, though, nations would not balk at helping the United States if a defined crime of terrorism could be prosecuted before the International Court of Justice, in The Hague. ---------------------- How thinking of terrorists as pirates can help win the war on terror. INTERNATIONAL LAW LACKS A DEFINITION FOR TERRORISM as a crime. According to Secretary General Kofi Annan, this lack has hampered "the moral authority of the United Nations and its strength in condemning" the scourge. But attempts to provide a definition have failed because of terrorists' strangely hybrid status in the law. They are neither ordinary criminals nor recognized state actors, so there is almost no international or domestic law dealing with them. This gives an out to countries that harbor terrorists and declare them "freedom fighters." It also lets the United States flout its own constitutional safeguards by holding suspects captive indefinitely at Guantánamo Bay. The overall situation is, in a word, anarchic. This chaotic state is reflected in, and caused by, the tortuous machinations of the U.N. in defining terrorism. Over 40 years of debate have produced a plethora of conventions proscribing acts ranging from hijacking to financing terrorist organizations. But the U.N. remains deadlocked on what a terrorist is. As a result, terrorists and countries like the United States pursue one another across the globe with virtually no rules governing their actions. What is needed now is a framework for an international crime of terrorism. The framework should be incorporated into the U.N. Convention on Terrorism and should call for including the crime in domestic criminal law and perhaps the jurisdiction of the International Criminal Court.
This framework must recognize the unique threat that terrorists pose to nation-states, yet not grant them the legitimacy accorded to belligerent states. It must provide the foundation for a law that criminalizes not only terrorist acts but membership in a terrorist organization. It must define methods of punishment. Coming up with such a framework would perhaps seem impossible, except that one already exists. Dusty and anachronistic, perhaps, but viable all the same. More than 2,000 years ago, Marcus Tullius Cicero defined pirates in Roman law as hostis humani generis, "enemies of the human race." From that day until now, pirates have held a unique status in the law as international criminals subject to universal jurisdiction--meaning that they may be captured wherever they are found, by any person who finds them. The ongoing war against pirates is the only known example of state vs. nonstate conflict until the advent of the war on terror, and its history is long and notable. More important, there are enormous potential benefits of applying this legal definition to contemporary terrorism. AT FIRST GLANCE, THE CORRELATION BETWEEN PIRACY AND TERRORISM seems a stretch. Yet much of the basis of this skepticism can be traced to romantic and inaccurate notions about piracy. An examination of the actual history of the crime reveals startling, even astonishing, parallels to contemporary international terrorism. Viewed in its proper historical context, piracy emerges as a clear and powerful precedent. Piracy has flourished on the high seas for as long as maritime commerce has existed between states. Yet its meaning as a crime has varied considerably. The Roman definition of hostis humani generis fell into disuse by the fifth century A.D. with the decline of the empire. But the act didn't disappear with the definition. By 912, pirates along the coasts of Western Europe who styled themselves as "sea-warriors," or Vikings, had terrorized Britain and conquered Normandy. 
In the early Middle Ages, with no national navies to quash them, pirates held sway over nearly every trade route in Europe. Kings like Edward I of England then began to grant "Commissions of Reprisal" to merchantmen, entitling them to attack both pirate ships and any other merchant vessel flying the same country's flag as the one flown by the pirates they had seen before. By the 16th century, piracy had emerged as an essential, though unsavory, tool of statecraft. Queen Elizabeth viewed English pirates as adjuncts to the royal navy, and regularly granted them "letters of marque" (later known as privateering, or piracy, commissions) to harass Spanish trade. It was a brilliant maneuver. The mariners who received these letters, most notably the famed explorers Francis Drake and Walter Raleigh, amassed immense fortunes for themselves and the Crown, wreaked havoc on Spanish fleets, and terrorized Spain's shoreside cities. Meanwhile, the queen could preserve the vestiges of diplomatic relations, reacting with feigned horror to revelations of the pirates' depredations. Witness, for example, the queen's disingenuous instructions saying that if Raleigh "shall at any time or times hereafter robbe or spoile by sea or by lance, or do any acte of unjust or unlawful hostilities [he shall] make full restitution, and satisfaction of all such injuries done." When Raleigh did what Elizabeth had forbidden--namely, sack and pillage the ports of then-ally Spain--Elizabeth knighted him. This precedent would be repeated time and again until the mid-19th century, as the Western powers regularly employed pirates to wage secret wars. After a series of draconian laws passed by George I of England effectively banished pirates from the Atlantic, the Mediterranean corsairs emerged as pre-eminent maritime mercenaries in the employ of any European state wishing to harass another. This situation proved disastrous. 
The corsairs refused to curtail their activities after each war's conclusion, and the states realized that they had created an uncontrollable force. It was this realization that led to the Declaration of Paris in 1856, signed by England, France, Spain, and most other European nations, which abolished the use of piracy for state purposes. Piracy became and remained beyond the pale of legitimate state behavior. IF THIS CHRONOLOGY SEEMS FAMILIAR, IT SHOULD. The rise and fall of state-sponsored piracy bears chilling similarity to current state-sponsored terrorism. Many nations, including Libya, Iran, Iraq, Yemen, and Afghanistan, have sponsored terrorist organizations to wage war against the United States or other Western powers. In each case, the motivations have been virtually identical to those of Elizabeth: harass the enemy, deplete its resources, terrify its citizens, frustrate its government, and remain above the fray. The United States is credited with manufacturing its own enemy by training, funding, and outfitting terrorist groups in the Middle East, Afghanistan, and Central America during the cold war. But the important lesson for us is not merely that history repeats itself. Looking at the past provides a parallel to our current dilemma but also a solution. The Declaration of Paris is, on the one hand, a recognition of shared guilt. On the other, it represents the first articulation since the Roman era of piracy as a crime in and of itself. The pirate, by this definition, exists like a malevolent satellite to the law of nations. "Considering . . . that the uncertainty of the law and of the duties in such a matter [as piracy] gives rise to differences of opinion between neutrals and belligerents which may occasion serious difficulties, and even conflicts," the declaration stated, the signing parties "have adopted the following solemn declaration: Privateering is and remains abolished." 
Until 1856, international law recognized only two legal entities: people and states. People were subject to the laws of their own governments; states were subject to the laws made amongst themselves. The Declaration of Paris created a third entity: people who lacked both the individual rights and protections of law for citizens and the legitimacy and sovereignty of states. This understanding of pirates as a legally distinct category of international criminals persists to the present day, and was echoed in the 1958 and 1982 U.N. Conventions on the Law of the Sea. The latter defines the crime of piracy as "any illegal acts of violence or detention, or any act of depredation, committed for private ends." This definition of piracy as private war for private ends may hold the crux of a new legal definition of international terrorists. DANIEL DEFOE, THE GREAT CHRONICLER OF PIRACY'S GOLDEN AGE in his General History of the Pyrates, described his subjects as stateless persons "at war with all the world," a definition that may connect contemporary terrorism to piracy even more than state sponsorship does. The legacy of the Elizabethan era was a diaspora of unemployed, malcontent mariners throughout the Atlantic colonies. By the late 17th century, they began to coalesce into small pirate bands, seize vessels at anchorages or on the high seas, and wage their own private wars. The myth of the romantic buccaneer, perpetuated by such diverse artists as Robert Louis Stevenson and Johnny Depp, must be set aside. The pirates of the so-called golden age, as historian Hugh Rankin described them, were "a sorry lot of human trash." Coming from the lowest tier of the English merchant navy, they struck indiscriminately in ferocious revenge against the societies that they felt had condemned them. Often these disenchanted sailors cast their piratical careers in revolutionary terms. 
The 18th-century English legal scholar William Blackstone defined a pirate as someone who has "reduced himself afresh to the savage state of nature by declaring war against all mankind," while another account tells of one Edward Low, common seaman, who "took a small vessel, [hoisted] a Black Flag, and declared War against all the World." Pirates gave their ships names that reflected this dark purpose: Defiance, Vengeance, New York's Revenge, and even New York Revenge's Revenge. Perhaps the most telling statement of the pirates' motives comes from a pirate named Black Sam Bellamy. To a captured merchant captain, he boasted, "I am a free prince, and have as much authority to make war on the whole world as he who has a 100 sail of ships and an army of 100,000 men in the field." This was more than bravado. Historian Marcus Rediker has suggested that it indicates a new "pirate democracy" that drew its revolutionary principles from its perceived war against civilization and cast itself as civilization's antithesis. Some pirate bands even had constitutions. The "pirate articles" that became commonplace in the early 18th century purported to lay out in legal terms both the rights and obligations that members in a pirate band enjoyed. An excerpt from articles of Captain John Phillips, drafted in 1723, even provides a sort of liability insurance for injured comrades. The corollaries between the pirates' "war against the world" and modern terrorism are profound and disturbing. With their vengeful practices, pirates were the first and perhaps only historical precedent for the terrorist cell: a group of men who bound themselves in extraterritorial enclaves, removed themselves from the protection and jurisdiction of the nation-state, and declared war against civilization. Both pirates and terrorists deliberately employ this extranationality as a means of pursuing their activities. The pirates hid in the myriad shoals and islands of the Atlantic. 
The terrorists hide in cells throughout the world. Both seek through their acts to bring notice to themselves and their causes. They share means as well--destruction of property, frustration of commerce, and homicide. Most important, both are properly considered enemies of the rest of the human race. WHILE THESE HISTORICAL PARALLELS MAY TITILLATE THE IMAGINATION, they only go so far. Piracy and terrorism may share similar histories, but are they the same crime under the law? How could something generally thought of as sea robbery equal the crime committed by the people who destroyed the World Trade Center? This apparent incongruity has prevented scholars from recognizing the currents that run through them both. A crime, under the domestic law of most nations, has three elements familiar to veterans of introductory classes in criminal law: mens rea, the mental state during the commission of a crime; actus reus, the actions that constitute a crime; and locus, the place where a crime occurs. If two crimes share the same mens rea, actus reus, and locus, they are, if not identical, comparable. While piracy and terrorism may not be the same crime, they share enough elements to merit joint definition under international law. First, consider the mens rea. Terror has always been an integral part of piracy, often used to achieve a psychological effect. Perhaps the greatest terrorist of all time was Edward Teach, alias Blackbeard. When boarding a prize in battle he wove sulfur fuses into his long beard and lit them, wreathing his face in green smoke and giving himself a satanic appearance. Pirates like Blackbeard understood that their trade was a highly dangerous one and that the odds were rarely on their side. If the sight of a pirate flag could strike terror into the hearts of the victim and lead to a bloodless capitulation, pirates could avoid exposing their vulnerabilities. Pirates used fear as a tactic and for its own sake as well. 
They often viewed their predatory activities as a means of striking blows against civilization. Terror, in this light, became a vital part of the message they wished to send. It was not uncommon for pirates to leave a single captured sailor alive to pass on the story of their depredations. Thus the mens rea of piracy--the desire to inflict death, destruction, or deprivation of property through violent acts accompanied by deliberate use of terrorism--is a close cousin to the perceived mens rea of organized terrorism. The main distinction between them is that, although pirates might use terror as a means to an end or an end in itself, terrorists necessarily employ it for the latter purpose. A similar calculus can be made for the actus reus. Piracy still refers to sea robbery, and most piratical incidents that occur today, particularly in the Malacca Straits in Southeast Asia and other pirate "hot spots," have pecuniary rather than political motives. Yet piracy also includes a great many acts that involve no actual theft at all. In 1922, in the aftermath of World War I, France, Italy, Japan, Britain, and the United States pledged in the Washington Declaration to punish "as an act of piracy" any unprovoked submarine attacks. The Spanish Civil War a decade later produced a second and even more revolutionary treaty, the Nyon Agreement of 1937. Signed by countries including Egypt, Greece, France, Britain, and the Soviet Union, it extended universal jurisdiction to any unidentified vessels or aircraft attacking merchant shipping on behalf of the Spanish insurgents, referring to such acts as "piratical." President Ronald Reagan extended this politicized definition of piracy still further during the Achille Lauro affair of 1985. Following the seizure of an Italian cruise liner by members of the Palestine Liberation Organization and the murder of one of its passengers (a wheelchair-bound American), Reagan declared the terrorists "pirates" and demanded their extradition. 
This melding of terrorist and piratical crimes later resulted in the creation of a U.N. convention that introduced the term "maritime terrorism" into the legal lexicon of piracy. Over time, then, the actus reus of piracy and terrorism have moved closer to one another, and overlapped in incidents like that of the Achille Lauro. Finally there is the locus. It seems axiomatic that piracy must occur at sea, but that assumption is false. Legal scholars have long recognized two secondary forms of piracy, one ancient, one quite modern. The first is termed "descent by sea." In the old days, this meant sending jolly boats ashore and sacking a town, as Captain Henry Morgan did throughout the Spanish colonies at Portobello, Maracaibo, and Panama City in the late 17th century. The second form, far more recent, is aerial piracy, commonly known as hijacking. The linkage of piracy and hijacking under the law is made explicit in numerous sources, including the Tokyo, Hague, and Montreal conventions on hijacking, the latter of which extended the definition of piratical acts to those committed "by the crew and passengers of a private ship or a private aircraft . . . against another ship or aircraft or against persons or property on board." Even in the infancy of aerial flight, jurists recognized the potential linkages between piracy in the air and piracy at sea. The Harvard Draft Convention on Piracy of 1932 stated, "The pirate of tradition attacked on or from the sea. Certainly today, however, one should not deem the possibility of similar attacks in or from the air as too slight or too remote for consideration. . . ." Suppose an airplane is hijacked en route and sent hurtling into a coastal city, causing great loss of life and destruction of property. Under both the U.N. hijacking and piracy conventions, it is certainly an act of aerial piracy. Yet it is also a descent by sea under the broadest understanding of the term. 
The pirates seize the vessel and use it to attack a shoreside target, descending upon their target from the air. This piratical understanding of locus lends itself to the attacks of September 11, of course, but also to many other cases. It could be extended to include any terrorist acts committed after the terrorist has landed in a foreign nation, provided that he arrives with the intention to commit them--meaning that there's great similarity in mens rea, actus reus, and locus between piracy and terrorism. TO UNDERSTAND THE POTENTIAL OF DEFINING TERRORISM as a species of piracy, consider the words of the 16th-century jurist Alberico Gentili's De jure belli: "Pirates are common enemies, and they are attacked with impunity by all, because they are without the pale of the law. They are scorners of the law of nations; hence they find no protection in that law." Gentili, and many people who came after him, recognized piracy as a threat, not merely to the state but to the idea of statehood itself. All states were equally obligated to stamp out this menace, whether or not they had been a victim of piracy. This was codified explicitly in the 1856 Declaration of Paris, and it has been reiterated as a guiding principle of piracy law ever since. Ironically, it is the very effectiveness of this criminalization that has marginalized piracy and made it seem an arcane and almost romantic offense. Pirates no longer terrorize the seas because a concerted effort among the European states in the 19th century almost eradicated them. It is just such a concerted effort that all states must now undertake against terrorists, until the crime of terrorism becomes as remote and obsolete as piracy. But we are still very far from such recognition for the present war on terror. President Bush and others persist in depicting this new form of state vs. 
nonstate warfare in traditional terms, as with the president's declaration of June 2, 2004, that "like the Second World War, our present conflict began with a ruthless surprise attack on the United States." He went on: "We will not forget that treachery and we will accept nothing less than victory over the enemy." What constitutes ultimate victory against an enemy that lacks territorial boundaries and governmental structures, in a war without fields of battle or codes of conduct? We can't capture the enemy's capital and hoist our flag in triumph. The possibility of perpetual embattlement looms before us. If the war on terror becomes akin to war against the pirates, however, the situation would change. First, the crime of terrorism would be defined and proscribed internationally, and terrorists would be properly understood as enemies of all states. This legal status carries significant advantages, chief among them the possibility of universal jurisdiction. Terrorists, as hostis humani generis, could be captured wherever they were found, by anyone who found them. Pirates are currently the only form of criminals subject to this special jurisdiction. Second, this definition would deter states from harboring terrorists on the grounds that they are "freedom fighters" by providing an objective distinction in law between legitimate insurgency and outright terrorism. This same objective definition could, conversely, also deter states from cracking down on political dissidents as "terrorists," as both Russia and China have done against their dissidents. Recall the U.N. definition of piracy as acts of "depredation [committed] for private ends." Just as international piracy is viewed as transcending domestic criminal law, so too must the crime of international terrorism be defined as distinct from domestic homicide or, alternately, revolutionary activities. 
If a group directs its attacks on military or civilian targets within its own state, it may still fall within domestic criminal law. Yet once it directs those attacks on property or civilians belonging to another state, it exceeds both domestic law and the traditional right of self-determination, and becomes akin to a pirate band. Third, and perhaps most important, nations that now balk at assisting the United States in the war on terror might have fewer reservations if terrorism were defined as an international crime that could be prosecuted before the International Criminal Court. For now, these possibilities remain distant. But there are immediate benefits to pointing out that terrorism has a precedent in piracy. In the short term, it is a tool to cut the Gordian knot of definition that has hampered antiterrorist legislation for 40 years. In the long term, and far more important, it provides the parameters by which to understand this current and intense conflict and the means within which it may one day be resolved. That resolution will begin with the recognition among nations that terrorism is a threat to all states and to all persons, the same recognition given to piracy in 1856. Terrorists, like pirates, must be given their proper status in law: hostis humani generis, enemies of the human race. Douglas R. Burgess Jr. is the author of Seize the Trident: The Race for Superliner Supremacy and How It Altered the Great War, published this year by McGraw-Hill. 
From checker at panix.com Thu Jul 28 21:44:50 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:44:50 -0400 (EDT) Subject: [Paleopsych] WP: $41 Billion Cost Projected To Remove Illegal Entrants Message-ID: $41 Billion Cost Projected To Remove Illegal Entrants http://www.washingtonpost.com/wp-dyn/content/article/2005/07/25/AR2005072501605_pf.html By Darryl Fears Washington Post Staff Writer Tuesday, July 26, 2005; A11 A new study by a liberal Washington think tank puts the cost of forcibly removing most of the nation's estimated 10 million illegal immigrants at $41 billion a year, a sum that exceeds the annual budget of the Department of Homeland Security. The study, "Deporting the Undocumented: A Cost Assessment," scheduled for release today by the Center for American Progress, is billed by its authors as the first-ever estimate of costs associated with arresting, detaining, prosecuting and removing immigrants who have entered the United States illegally or overstayed their visas. The total cost would be $206 billion to $230 billion over five years, depending on how many of the immigrants leave voluntarily, according to the study. "There are some people who suggest that mass deportation is an option," said Rajeev K. Goyle, senior domestic policy analyst for the center and a co-author of the study. "To understand deportation policy response, we had to have a number." Advocates for tougher enforcement of immigration laws did not dispute the study's figures but disputed its assumptions about how enforcement would work. The study assumed that tougher enforcement would induce 10 percent to 20 percent of undocumented residents in the United States to leave voluntarily. 
But Mark Krikorian, executive director of the Center for Immigration Studies, which advocates stronger enforcement of immigration laws, argued that as many as half would leave voluntarily if the government were to aggressively seek them out and crack down on businesses that hire them illegally. "We do need to know what enforcement would cost," he said, "but [the study] is a cartoon version of how enforcement would work."

The study estimates that it would cost about $28 billion per year to apprehend illegal immigrants, $6 billion a year to detain them, $500 million for extra beds, $4 billion to secure borders, $2 million to legally process them and $1.6 billion to bus or fly them home.

Goyle said that he conducted the study, in part, to respond to conservative officials who have advocated mass deportations, in some cases immediately. Earlier this year, former House speaker Newt Gingrich advocated sealing U.S. borders and deporting all illegal immigrants within 72 hours of arrest.

Will Adams, a spokesman for Rep. Tom Tancredo (R-Colo.), an outspoken advocate of stronger immigration laws, called the study "an interesting intellectual exercise" by liberals that is "useless . . . because no one's talking about" employing mass deportation as a tactic.

"No one's talking about buying planes, trains and automobiles to get them out of the country," Adams said. "The vast number of illegal immigrants are coming for jobs. Congressman Tancredo wants to go after the employers."
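The study's line items can be cross-checked with a quick sum (a minimal sketch in Python; the rounded component figures are the ones reported above, and the naive five-year number simply scales the annual total, ignoring the study's voluntary-departure assumptions):

```python
# Annual cost components from the Center for American Progress study,
# in billions of US dollars, as reported in the article above.
costs = {
    "apprehension": 28.0,
    "detention": 6.0,
    "extra beds": 0.5,
    "border security": 4.0,
    "legal processing": 0.002,  # $2 million
    "transport home": 1.6,
}

annual_total = sum(costs.values())
print(f"annual total: ~${annual_total:.1f} billion")

# The article quotes $206 billion to $230 billion over five years.
print(f"naive five-year total: ~${5 * annual_total:.0f} billion")
```

The components sum to roughly $40 billion, close to the study's $41 billion headline figure; the five-year range quoted in the article presumably reflects assumptions (such as voluntary departures) not captured by simple scaling.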
From checker at panix.com Thu Jul 28 21:46:47 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:46:47 -0400 (EDT) Subject: [Paleopsych] Richard Heinberg: The Likely Impact of Global Oil Peak on the United States Message-ID: Richard Heinberg: The Likely Impact of Global Oil Peak on the United States 20 May 2005 In Brief: Speech Delivered by Richard Heinberg at the ASPO Conference 2005 in Lisbon, Portugal Throughout most of the late 19th and early 20th centuries the US was the world's foremost oil producing and exporting nation; it was also the first important producing nation to pass its all-time oil production peak, which occurred in 1971. Thus America is emblematic for understanding world oil history and the approaching global extraction peak. While each nation will be impacted differently by global oil peak, the types of effects that are likely to be seen in the US can be extrapolated elsewhere; however, effects in this instance will be more pronounced because of America's extreme and arguably unmatched economic dependence on petroleum. America's original endowment of oil is estimated at somewhat less than 200 billion barrels, of which 170 billion (or about 90 percent) has been extracted (ASPO, 2002). Current production of conventional oil, including offshore areas and Alaska, is about 5.5 million barrels per day; non-conventional sources yield a little more than 2 million barrels per day. Present US consumption stands at 20 million barrels per day; imports account for nearly 60% of usage. (EIA, 2005) The US has the highest per-capita consumption of oil of any large country, and is the world's foremost oil user and importer. Well over 97% of US transportation energy comes from petroleum, and Americans are the most mobile people on the planet: there are more autos in the country than there are licensed drivers-about 210 million total. 
Americans drive an average of 12,000 miles yearly at an average fuel efficiency of 20.8 miles per gallon (about 8.8 kilometers per liter) (EPA, 2005). Petroleum dependency has been systematically encouraged through suburban design and the lack of public transportation alternatives to the private automobile. The peak of per capita public transportation usage occurred in the 1940s; following this, the nation invested hundreds of billions of dollars in its Interstate Highway System, effectively a subsidy to the auto and oil companies; simultaneously, it invested heavily in civilian air transport while systematically dismantling its interurban rail and urban light rail systems.

The US was also the center of modern agricultural developments-the widespread deployment of petrochemical fertilizers, pesticides, herbicides, and powered farm machinery-that have made the nation's food system overwhelmingly oil-dependent.

Oil currently accounts for 40 percent of total US energy usage, making it the nation's primary energy source. Domestic production of natural gas, the nation's second most important energy source, is also in decline. The US has large domestic coal reserves; however, quality is highly variable and a recent Hubbert curve analysis suggests a domestic production peak in as few as 20 years (Vaux, 2004). The nation derives 8 percent of its energy from nuclear power; that amount could be increased substantially, but the cost would be enormous and the development time would be considerable. Only 6 percent of US energy production is from renewable sources, most of that being hydroelectricity and the burning of biomass, with solar, wind, tidal, and wave energy combined contributing less than one quarter of one percent.

All of this is well known. What is less often discussed is the challenge that will be presented by global oil peak.
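A quick consistency check on the driving statistics above (a sketch using only the figures quoted in the text plus the standard 42-gallon oil-industry barrel; the 210 million vehicle count is the approximate fleet size mentioned earlier):

```python
# Figures quoted in the text above.
miles_per_year = 12_000   # average miles driven per vehicle per year
mpg = 20.8                # average fuel efficiency, miles per gallon
vehicles = 210e6          # autos in the US fleet (approximate)
gallons_per_barrel = 42   # standard oil-industry barrel

gallons_per_car = miles_per_year / mpg            # roughly 577 gallons/year
barrels_per_car = gallons_per_car / gallons_per_barrel

# Gasoline demand implied by the light-duty fleet, in barrels per day.
fleet_bpd = vehicles * barrels_per_car / 365
print(f"{gallons_per_car:.0f} gallons of gasoline per car per year")
print(f"implied fleet demand: {fleet_bpd / 1e6:.1f} million barrels/day")
```

That works out to roughly 8 million barrels per day for passenger vehicles alone, which fits comfortably inside the 20 million barrels per day of total US petroleum consumption quoted above and illustrates why transportation dominates that figure.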
The US was able to make up for its domestic oil peak by means of four primary strategies:

* Importing more oil from other nations,
* Relying on the US dollar denomination of global oil sales to bolster the value of the dollar and therefore to make imports artificially cheap,
* Using military power to defend access to oil-producing regions and to enforce stability in those regions,
* Partial efforts to increase energy efficiency.

When global oil production peaks some of these strategies will likely begin to fail. Imports will become more expensive, in both absolute and relative terms. Of course, prices for oil itself will be much higher, but so will prices for nearly everything else (due to rising energy costs for manufacturing and transportation); thus consumer purchasing power will be strained, making higher fuel costs harder to absorb. At the same time, the continuing declining relative value of the dollar measured against other currencies will add to the real cost of fuel.

The prevalent denomination of oil sales in US dollars may cease, due to the dollar's declining value, which is due to bloated US trade deficits, which are themselves at least partly attributable to the high rate of US oil imports. If oil does come to be sold more frequently for other currencies, this will merely add to the downward pressure on the dollar's value, creating a reinforcing feedback loop.

America's military strategy in Iraq-which appears to be part of a larger design to dominate oil-producing regions globally-is already significantly challenged by armed resistance in that country. Attempts by the US to pursue a similar military strategy in other countries are likely to be resisted not only by the people of those countries but also by other nations averse to the notion of a unipolar world.
China, Russia, India, Venezuela, and Iran appear to be engaging in economic and in some cases military alliances in an effort to counterbalance US hegemony in the Middle East, Central Asia, and Latin America, with the future of Africa also in dispute. Meanwhile the consequences of America's lack of vigor and thoroughness in pursuing energy efficiency and conservation domestically over the past two decades will hamper its ability to adapt to a low-energy future. Already Germany, Spain, Netherlands, and Japan have leapt far ahead of the US in per-capita production of solar and wind power. The US may find itself needing to invest heavily in new energy infrastructure at a time when its economy will be hard pressed to maintain emergency services for its increasingly unemployed and desperate population. The nation's relative success in its energy transition will thus hinge on whether the global peak occurs sooner (2005) or later (the extremely unlikely date of 2020), and whether leaders accept the energy transition as their immediate top priority and make maximum use of whatever time is left, or continue to postpone the effort (Hirsch et al., 2005). In the more likely case that peak occurs soon and few efforts at transition are made prior to the event, there will be profound economic impacts (Hirsch et al., 2005). Within years, the average American will have less opportunity, purchasing power, and mobility. Food will cost more and consumer choices will be severely constrained. Life expectancy may decline markedly, and America's cities will likely fall into decay. While US policy makers have squandered opportunities to avert such scenarios, even after the peak they will still face important choices, and their decisions will continue to be fateful both for US citizens and for the rest of the world. With regard to foreign policy, decision makers must choose whether to seek military solutions to what is essentially an economic problem. 
If they pursue militarism, this could set loose a chain of violence throughout western Asia, Africa, and South America. The ultimate consequences are frightening to contemplate.

With regard to domestic policy, decision makers must choose whether and how to intervene in the economy. Economic contraction will occur, whether planned and coordinated or forced and improvised. If the government takes a hands-off approach, the suffering of the citizenry will be acute and will eventually lead to organized protests on a massive scale. Yet if the government chooses active strategies-rationing, creating employment in the agricultural sector, subsidizing alternatives, and mandating radical conservation measures-its efforts will still be subject to harsh criticism. Hence in either case it is likely that decision makers will respond by curtailing civil rights and expanding police powers.

If the 20th century saw America's economic and geopolitical ascendancy, the 21st will almost certainly see its decline. The problems created for the US by peak oil will no doubt eventually be solved; however, the process will entail profound changes at every level of American society.

Sources:

ASPO Newsletter #23, November 2002, www.peakoil.ie/newsletters
EIA (Energy Information Administration) web site, www.eia.doe.gov/
EPA (Environmental Protection Agency) web site, www.epa.gov/otaq/trends.htm
Hirsch, Robert L., et al., "Peaking of World Oil Production: Impacts, Mitigation, & Risk Management" (SAIC, March 2005), www.energybulletin.net/4789.html
Vaux, Gregson, "The Peak in US Coal Production" (FTW, 2004)
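The Hubbert-curve reasoning invoked throughout the speech (and in the Vaux coal analysis) can be sketched in a few lines: cumulative extraction is modeled as a logistic curve, so the production rate peaks when half the ultimately recoverable resource is gone. A minimal illustration using the roughly 200-billion-barrel US endowment cited earlier; the growth constant k is an illustrative round number, not a fitted value:

```python
import math

URR = 200.0    # ultimately recoverable US oil, billions of barrels (from the text)
k = 0.06       # logistic growth rate per year -- an illustrative assumption
t_peak = 1971  # the historical US production peak

def cumulative(t):
    """Logistic cumulative production Q(t), in billions of barrels."""
    return URR / (1 + math.exp(-k * (t - t_peak)))

def rate(t):
    """Annual production dQ/dt = k * Q * (1 - Q / URR)."""
    q = cumulative(t)
    return k * q * (1 - q / URR)

# Production peaks in t_peak, when exactly half the endowment has been produced.
print(f"produced by {t_peak}: {cumulative(t_peak):.0f} billion barrels")
print(f"peak-year output: {rate(t_peak):.1f} billion barrels/year")
print(f"produced by 2005: {cumulative(2005):.0f} billion barrels")
```

With this round k the model puts cumulative US production by 2005 near 177 billion barrels, close to the roughly 170 billion the text reports as already extracted; a fitted k would sharpen the match, but the shape of the argument is the same.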
From checker at panix.com Thu Jul 28 21:46:59 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:46:59 -0400 (EDT) Subject: [Paleopsych] BBC: Japanese develop 'female' android Message-ID: Japanese develop 'female' android http://news.bbc.co.uk/go/pr/fr/-/1/hi/sci/tech/4714135.stm Published: 2005/07/27 09:10:07 GMT By David Whitehouse Science editor, BBC News website Japanese scientists have unveiled the most human-looking robot yet devised - a "female" android called Repliee Q1. She has flexible silicone for skin rather than hard plastic, and a number of sensors and motors to allow her to turn and react in a human-like manner. She can flutter her eyelids and move her hands like a human. She even appears to breathe. Professor Hiroshi Ishiguro of Osaka University says one day robots could fool us into believing they are human. Repliee Q1 is not like any robot you will have seen before, at least outside of science-fiction movies. She is designed to look human and although she can only sit at present, she has 31 actuators in her upper body, powered by a nearby air compressor, programmed to allow her to move like a human. "I have developed many robots before," Repliee Q1's designer, Professor Ishiguro, told the BBC News website, "but I soon realised the importance of its appearance. A human-like appearance gives a robot a strong feeling of presence."

Designed to look human

Before Repliee Q1, Professor Ishiguro developed Repliee R1 which had the appearance of a five-year-old Japanese girl. Its head could move in nine directions and it could gesture with its arm. Four high-sensitivity tactile sensors were placed under the skin of its left arm that made the android react differently to differing pressures. The follow-up has the appearance of a Japanese woman.
To program her motion, a computer analysed the motions of a human and used them as a template for the way Repliee Q1 moves. She can be designed to follow the movement of a human wearing motion sensors or to act independently. "Repliee Q1 can interact with people. It can respond to people touching it. It's very satisfying, although we obviously have a long way to go yet." Professor Ishiguro believes that it may prove possible to build an android that could pass for a human, if only for a brief period. "An android could get away with it for a short time, 5-10 seconds. However, if we carefully select the situation, we could extend that, to perhaps 10 minutes," he said. "More importantly, we have found that people forget she is an android while interacting with her. Consciously, it is easy to see that she is an android, but unconsciously, we react to the android as if she were a woman." From checker at panix.com Thu Jul 28 21:47:13 2005 From: checker at panix.com (Premise Checker) Date: Thu, 28 Jul 2005 17:47:13 -0400 (EDT) Subject: [Paleopsych] Science Blog: Physicists Entangle Photon and Atom in Atomic Cloud Message-ID: Physicists Entangle Photon and Atom in Atomic Cloud http://www.scienceblog.com/cms/node/8532 Quantum communication networks show great promise in becoming a highly secure communications system. By carrying information with photons or atoms, which are entangled so that the behavior of one affects the other, the network can easily detect any eavesdropper who tries to tap the system. Physicists at the Georgia Institute of Technology have just reached an important milestone in the development of these systems by entangling a photon and a single atom located in an atomic cloud. Researchers believe this is the first time an entanglement between a photon and a collective excitation of atoms has passed the rigorous test of quantum behavior known as a Bell inequality violation. The findings are a significant step in developing
secure long-distance quantum communications. They appear in the July 22, 2005 edition of Physical Review Letters.

Relying on photons or atoms to carry information from one place to another, such a network gets its security from a method known as quantum cryptographic key distribution. In this method, the two information-carrying particles, photonic qubits or atomic qubits, are entangled. Because of the entanglement and a rule in quantum physics that states that measuring a particle disturbs that particle, an eavesdropper would be easily detected because the very act of listening causes changes in the system.

But many challenges remain in developing these systems, one of which is how to get the particles to store information long enough and travel far enough to get to their intended destination. Photonic qubits are great carriers and can travel for long distances before being absorbed into the conduit, but they're not so great at storing the information for a long time. Atomic qubits, on the other hand, can store information for much longer. So an entangled system of atoms and photons offers the best of both worlds. The trick is how to get them entangled in a simple way that requires the least amount of hardware.

Physicists Alex Kuzmich and Brian Kennedy think that taking a collective approach is the way to go. Instead of trying to isolate an atom to get it into the excited state necessary for it to become entangled with a photon, they decided to try to excite an atom in a cloud of atoms.

"Using a collective atomic qubit is much simpler than the single atom approach," said Kuzmich, assistant professor of physics at Georgia Tech. "It requires less hardware because we don't have to isolate an atom. In fact, we don't even know, or need to know, which atom in the group is the qubit. We can show that the system is entangled because it violates Bell inequality."

"With single atoms, it's much more difficult to control the system because there is so much preparation that must be done," said Kennedy, professor of physics at Georgia Tech. "For the collective excitation, the initial preparation of the atoms is minimal. You don't have to play too much with their internal state - something that's usually a huge concern."

In addition to having the system pass the rigorous test of Bell inequality, researchers said they were able to increase the amount of time the atomic cloud can store information to several microseconds. That's fifty times longer than it takes to prepare and measure the atom-photon entanglement.

Another challenge of quantum communication networks is that since photons can only travel so far before they get absorbed into the conduit, the network has to be built in nodes with a repeater at each connection. "A very important step down the road would be to put systems like this together and confirm they are behaving in a quantum mechanical way," said Kennedy.

From Georgia Institute of Technology

From checker at panix.com Fri Jul 29 15:46:30 2005 From: checker at panix.com (Premise Checker) Date: Fri, 29 Jul 2005 11:46:30 -0400 (EDT) Subject: [Paleopsych] AP: Homeland Security To Launch RFID Systems At Border Crossings Message-ID: Homeland Security To Launch RFID Systems At Border Crossings http://informationweek.com/shared/printableArticleSrc.jhtml?articleID=166403260 [Thanks to Eugen for this.] Five border posts with Canada and Mexico will get the systems, to track visitors driving in and out of the U.S. By Beth Duff-Brown, The Associated Press July 28, 2005 TORONTO (AP) --The U.S. Department of Homeland Security will install radio frequency technology at five border posts with Canada and Mexico to track foreigners driving in and out of North America.
In its ongoing efforts to tighten border security and monitor possible terrorist and criminal activity, Bob Mocny of the Department of Homeland Security said the wireless chips for vehicles would become mandatory at designated border crossings in Canada and Mexico as of next Thursday. "This is a major transformation of how we are going to be gathering information about entries and exits along the border," Mocny said at a Wednesday news conference in Toronto. "The fundamental obligation of our government is to protect our citizens." After a foreigner entering the U.S. has passed a thorough security check once, they will be given a document containing the chip. This document will need to be renewed every six months. The document is meant to be placed on the dashboard of a car so that a person's personal information can be read as they approach a border crossing. Even with the radio frequency technology, however, the vehicle will still have to stop. If a person's identifying data produce no red flags, they will get just a cursory check at the border rather than lengthy questioning. Canadians and Mexicans, who fall under special immigration rules, are exempt from needing the chip. The mandatory program will apply, however, to all foreigners with U.S. visas--including those from the 27 countries whose citizens don't need visas for short U.S. visits--who cross into the United States at those points. The American Civil Liberties Union has expressed concern that the program violates privacy rights for "third country nationals" who fall under the program. Some immigrant groups also have argued that the technology would target Muslims and empower a growing society of surveillance. But Homeland Security officials insist weeding out potential terrorists, drug dealers, and other criminals from the innocent shoppers, truckers, and tourists who regularly cross the borders is a must. 
Members of the Canadian Race Relations Foundation met Wednesday with Mocny to discuss their concerns. They came away hoping the new technology may in fact help to fight racial and religious profiling. Karen Mock, the foundation's executive director, said she hears stories of people with "Middle Eastern-sounding names or darker complexions" being stopped and questioned frequently. She said technology could help by eliminating the possibility of stereotyping. "They are able to ensure that regardless of people's names or what other countries they've been visiting, that if they're frequent travelers and they've already been cleared and their data is fine, then they can move through it much more quickly," Mock told The Canadian Press. Radio frequency antennae have been installed at the border crossings at Thousand Islands Bridge in Alexandria Bay, New York; and Blaine, Washington, crossings for the Pacific Highway and Peace Arch. The technology will also be launched next week at two crossings between Mexico and Nogales, Arizona. The radio frequency program--known as RFID--is an expansion of the US-VISIT program, which was launched last year at 115 airports, 15 sea ports, and 50 of the busiest land border crossings along the U.S. borders with Canada and Mexico. Under the biometrics program, foreigners are fingerprinted and photographed and those details are fed into federal databases. Mocny said some 35 million people have gone through the program, which is set to expand to another 115 land crossings along the Canadian and Mexican borders by the end of this year. He said some 700 potential criminals with outstanding arrest warrants or whose activities raised red flags have been nabbed under the program that costs more than US$300 million (euro250 million) a year. 
From checker at panix.com Fri Jul 29 15:46:40 2005 From: checker at panix.com (Premise Checker) Date: Fri, 29 Jul 2005 11:46:40 -0400 (EDT) Subject: [Paleopsych] BBC: Implant chip to identify the dead Message-ID: Implant chip to identify the dead http://news.bbc.co.uk/go/pr/fr/-/1/hi/technology/4721175.stm Published: 2005/07/28 08:29:22 GMT [Thanks to Eugen for this.] The carnage inflicted by bomb attacks in Egypt, London and across Iraq has raised the problem of how the authorities identify people in an emergency situation. Whether through natural disaster or man-made, the killing of large numbers of people presents a great challenge to the emergency services, who have to identify the victims as quickly as possible. One aid to identification advocated by an American company is the VeriChip, a small device containing a unique number injected into a person's arm. On 11 September, some rescue workers, aware of the huge dangers they were facing, took to writing their badge number on their skin, in case they became victims themselves. Their attempts to ensure their own identity should the worst happen were spotted by New Jersey surgeon Richard Seelig. Five days later, he injected himself with two rice grain-sized chips, containing a unique number which could be used to identify him. "I wanted to demonstrate its effectiveness as being used as an identifier for people," Dr Seelig told BBC World Service's Analysis programme. "Also, I wanted to show it could be as comfortable for a person as not having one, so that it wouldn't interfere with that person's daily life."

Losing anonymity

Following the Asian tsunami which struck on Boxing Day 2004, many thousands of bodies could only be identified by DNA testing - a process that, in some cases, took months to complete. 
Similarly, following the bomb blasts on the London Underground, the process of identifying some bodies - particularly on the deep-lying Piccadilly Line - became very difficult, with some families upset by the amount of time it took to confirm a relative had died. VeriChip advocates argue it could help in these circumstances. Dr Seelig is now vice president for medical applications at VeriChip, which makes the devices - although it is yet to make a profit. He had been developing the device for more than a year before the 11 September attacks. The inspiration to develop it arose during his 20 years as a surgeon and the regular delays caused by patients unable to remember important healthcare information. He saw that the delays could be eliminated by marrying an identifier linking a person to their healthcare information with Radio Frequency Identification (RFID). Dr Seelig sees three major uses for the chip, all of which relate to the need for access to a patient's medical records. One is for individuals who have memory impairments, such as Alzheimer's, or those who are unable to speak, such as those who have suffered a stroke. It may also be very useful for those with chronic diseases, such as heart disease or epilepsy, who can suffer an attack almost instantaneously. Being able to access a person's medical records in such an event could be life-saving. And the third category, Dr Seelig said, is those who have sophisticated medical devices such as pacemakers, as the details of these devices are very advanced and difficult for someone who is not technically-minded to recall.

Scanned and known

Others are also taking note of the technology. 
The US Food and Drug Administration, which scrutinises all drugs and medical devices in the US, has given the chip its approval; officials in Mexico have already used the chip as a way of heightening security in sensitive areas; and the Harvard Medical School in Massachusetts now has several hospitals testing the device. The emergency room at one hospital has been fitted with readers so that anyone who has the chip can be scanned - but Harvard has not yet decided how much emphasis to put on the chip's use. As part of the trials, Dr John Halamka, the chief information officer at Harvard, has been fitted with the chip in the back of his arm. "In a sense I've lost my anonymity," he told Analysis. "Anywhere I go I can be scanned and known." However, he said he had been convinced by the chip. "The side effects have been none - the readability of the chip has been good," he added. "So for my personal goal of being identified in the case of an accident, it does work for me."

Identity theft

Others, however, are not as supportive. "It's a very scary technology," said Katherine Albrecht, a consumer rights analyst and founder of Caspian, a pressure group which opposes RFID. Ms Albrecht has been tracking the development of the VeriChip. "It's very de-humanising," she added. "I would no longer be known as a living, breathing, spiritual person but become known as a single number that would be emanating from a chip in my flesh... essentially becoming a form of human inventory, rather than a human being." She also argues that the chip is not secure - every time a reader is passed, the number is tracked, whether the user wishes this or not - and contends that being constantly identifiable is not necessarily a good thing. "A criminal could scan you surreptitiously, then use that information to access other information about you, and potentially do some identity theft," she said. 
"The other thing they could do is that, by scanning that number, it's actually quite a simple matter to capture the number and create your own chip with the same number in it. "You could simply programme a different chip, put it inside an encapsulated device, and put it in your own arm - and at that point you could pose as the individual whose identity you have chosen to steal." From checker at panix.com Fri Jul 29 15:46:47 2005 From: checker at panix.com (Premise Checker) Date: Fri, 29 Jul 2005 11:46:47 -0400 (EDT) Subject: [Paleopsych] Wired: Cave Pharming Yields Big Crops Message-ID: Cave Pharming Yields Big Crops http://www.wired.com/news/medtech/0,1286,67305,00.html Apr. 22, 2005 By Kristen Philipkoski It's not the bucolic, sun-dappled landscape you might envision when picturing American farmland. But a chilly, damp cave with no natural light just may be the most productive agricultural environment around. Purdue researchers and entrepreneur Doug Ausenbaugh didn't launch an underground farm because they thought it would yield more crops. They wanted to provide biotech companies a safe environment for growing crops containing pharmaceutical drugs for humans. But they were pleasantly surprised to find that not only did the former quarry apparently keep pollen from the corn, tobacco, soybeans, tomatoes and potatoes from escaping, but it also led to higher yields than greenhouses or outdoor fields. Some researchers believe that growing drugs in crops could be a cheaper and easier way to get biotech drugs than growing them in vats of genetically modified bacteria, as it's done today. But companies pursuing this approach have suffered setbacks due to government regulators, protests from environmental groups, and at least one incident in which a pharmaceutical crop nearly slipped into the food supply. 
Last year, Ausenbaugh founded Controlled Pharming Ventures to grow crops in a former quarry and underground warehouse, in the hope that it would reduce the risks inherent in "pharming." With the help of Purdue scientists and a grant from the Indiana 21st Century Research and Technology Fund, he seems at least to have proven that crops can grow robustly in a seemingly inhospitable 60-acre former limestone quarry in Marengo, Indiana. "We didn't know if there would be some trace contaminant or gas in the atmosphere that could have been a show stopper to normal crop growth and development," said Cary Mitchell, a Purdue horticulture professor, in an e-mail. "There wasn't. Things went smoothly." The average yield for the genetically modified corn (Bt corn, which contains a gene that produces a protein that kills larvae of the European corn borer) grown in the facility was 337 bushels per acre. The researchers also grew corn in a greenhouse, getting 267 bushels per acre. The average yield for field corn in the United States is just 142 bushels per acre. The researchers say they can achieve higher yields in the cave thanks to the controlled environment. Although it's more expensive to grow crops in an artificial environment, higher yields could help offset the cost. Mitchell says that if they can make the lighting system even more efficient, the cave system could revolutionize U.S. farming, whether it involves growing genetically modified or conventional crops. For example, he's working on a way to use plant debris as an energy source to feed the lighting system, which also helps ward off the chill of the cave. The system could even support organic farming, because fruits and vegetables could be grown without pesticides, since there are no insects in the cave. His potential customers include companies like Ventria Bioscience, whose pharming efforts have been held up by the USDA and concerns from farmers and environmental groups in states including California and Missouri. 
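The yield figures reported above imply sizable relative gains for the cave-grown corn. As a quick back-of-envelope check (the percentages below are computed here, not taken from the article):

```python
# Yields reported in the article, in bushels per acre.
cave = 337        # Bt corn grown in the former limestone quarry
greenhouse = 267  # the same corn grown in a greenhouse
field = 142       # average U.S. field corn

def pct_gain(a, b):
    """Percentage by which yield a exceeds yield b."""
    return 100 * (a - b) / b

print(f"Cave vs. field:      +{pct_gain(cave, field):.0f}%")
print(f"Cave vs. greenhouse: +{pct_gain(cave, greenhouse):.0f}%")
```

So the cave yield runs roughly 137% above the national field average and about 26% above the greenhouse figure, which gives a sense of how much room the "higher yields could help offset the cost" argument has to work with.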
Environmentalists don't trust that the transgenic plants -- crops with foreign DNA added to their genome -- won't contaminate food crops. They point out that shipping mishaps, not pollen drift, have caused most contamination problems so far. So they don't think growing pharmaceuticals in a cave will solve that problem. If medications make their way to people for whom the drugs were never intended, the results could be disastrous. That almost happened in Nebraska in 2002 when ProdiGene accidentally mixed corn it said contained an animal vaccine with half a million bushels of soybeans meant for human consumption. All of the corn and soybeans had to be destroyed. In Iowa, ProdiGene corn cross-pollinated with 155 acres of conventional corn, which then had to be burned. "It's an issue of irresponsibility on the part of the biotech industry," said Bill Freese, a research analyst at Friends of the Earth. In March, the journal Nature reported that hundreds of tons of an unapproved genetically modified corn called Bt10 had entered the food and feed supply in the United States and overseas since 2001. Bt10 is not a pharmaceutical crop -- Swiss biotech Syngenta engineered the corn to produce pesticide. But Freese and others say this and other examples show that genetically modified crops are difficult to contain. "Most people would reasonably require that there's no chance that a pharmaceutical agent could contaminate the food supply," said Craig Culp, media director for the Center for Food Safety. "And the only way to do that is not engineering our food crops to produce pharmaceutical or chemical agents." From checker at panix.com Fri Jul 29 15:48:24 2005 From: checker at panix.com (Premise Checker) Date: Fri, 29 Jul 2005 11:48:24 -0400 (EDT) Subject: [Paleopsych] CBC: Nigel M. de S. Cameron: How we Lost "Bioethics" and How We Can Win it Back (and more) Message-ID: Nigel M. de S. 
Cameron: How we Lost "Bioethics" and How We Can Win it Back The Weekly Newsletter of the Center for Bioethics and Culture Network http://www.cbc-network.org/enewsletter/index_7_27_05.htm#article1 Remember the anguished hand-wringing in the 50s over "Who lost China?" Well, the biggest question facing Americans today is: Who lost bioethics? Because lost it we have. "Bioethics," a made-up word from the early 70s, covers everything from abortion to euthanasia to stem cells and cloning. It's the debate about medicine and ethics and biology and public policy. It's also a pseudo-profession, culled from the ranks of philosophers and docs and biology profs, that gives its members their 15 minutes of fame on network TV. Guru-in-chief is Art Caplan of the University of Pennsylvania; his wannabe successor is Glenn McGee, whose six-figure book deal shows his star quality. Caplan is a charming man, smart, suave, and when I have debated him, sometimes given to surprising candor. One time an anchor translated my more measured terms and asked him if an egregious pro-cloning statement by Michael West (who claimed to have cloned the first US human embryos) was b*** s***. He had the decency to agree. And he is influential. When the United Nations started its 3-year debate on human cloning I was invited to serve as bioethics adviser on the US delegation. Guess which American the UN itself invited to be the cloning "expert" who lectured representatives of every nation in the world as part of a five-member panel? Caplan. Arthur Caplan is the quintessential face of contemporary bioethics. Yet he does not in any way represent the American people. How did bioethics get so out of whack with the people? How did it switch from a Hippocratic focus on the sanctity of life to a public relations department for whatever the biotech industry wants to do next? In the wide-ranging book on The Secular Revolution (edited by Christian Smith, 2003), one chapter lets John H. 
Evans explain the bioethics story - how something so close to the hearts of religious Americans on such vital issues ended up almost entirely in the hands of the secular elite. One central problem, of course, is that we walked out. There is no question that a chief agent of secularization in American culture has been "conservative" Christians. They have withdrawn from the fray faster than anyone has pushed them out. And there is no better example than in the field of bioethics. If here, where human life is most immediately at stake - and where we have deployed such energetic political and caring resources to the question of abortion - we have failed to develop expertise and leadership, is it a surprise that in other areas of the culture we keep sensing that we are losing it?

[This article will be continued next week.] _________________________________________________________________ The Human Future: What is "the human future?" What does it mean? When there are enough issues crowding into our daily lives as it is, why should we think about such a seemingly irrelevant philosophical discussion as our "human future?" Well, because as Dr. Cameron so poignantly pointed out above, the issues related to the taking, making, and faking of human life are the issues that will dominate the 21st Century. These are not philosophical in nature. These issues are at the forefront of the scientific community's agenda and have the potential for doing much good and much harm. Much good, by relieving human suffering, and much harm by devaluing the inherent dignity of all human beings. Unfortunately, if you have been following the news lately you will see how a [10]utilitarian-based science has dominated the discussion. These articles on [11]Eugenics, [12]Euthanasia, [13]Stem Cell Research, and [14]Egg Donation are only a few to show you that much is at stake for "The Human Future." 
"The Human Future," then, is about raising the red flag when human dignity is at stake, and it is about grounding science in moral responsibility. Even more importantly, it is about celebrating the beauty and complexity of human life in all of its various stages, from the zygote to the deathbed, and in that way securing a human future for us and the generations beyond us. CBC is about equipping people to face the challenges of the 21st Century, and we use all the tools necessary to raise awareness about these issues. We host events and debates, we offer resources, and much more. We offer you as many opportunities as we can to engage yourself and those you know in these discussions.

References

10. http://en.wikipedia.org/wiki/Utilitarianism
11. http://www.cbc-network.org/redesigned/research_display.php?id=221
12. http://www.cbc-network.org/redesigned/news_display.php?id=359
13. http://slate.msn.com/id/2123269/entry/2123270/
14. http://www.temple-news.com/media/paper143/news/2005/02/22/Opinion/Prevalence.Of.Egg.And.Sperm.Donation.Trivializes.Life-872573.shtml

From checker at panix.com Fri Jul 29 15:48:33 2005 From: checker at panix.com (Premise Checker) Date: Fri, 29 Jul 2005 11:48:33 -0400 (EDT) Subject: [Paleopsych] NYT: The Dinosaur That Walked on Two Legs May Have Started Life on Four Message-ID: The Dinosaur That Walked on Two Legs May Have Started Life on Four http://www.nytimes.com/2005/07/29/science/29dino.html?pagewanted=print By [3]JOHN NOBLE WILFORD For several years scientists have been finding fossilized embryos of dinosaurs from 80 million to 100 million years ago. They have now uncovered several 190-million-year-old dinosaur embryos, the oldest ever found. The discovery is being reported today in the journal Science by a team of paleontologists headed by Robert Reisz of the University of Toronto. 
The fossils were actually excavated in 1978 in South Africa, but it has taken this long to expose the embryos from the surrounding rock and eggshell and then interpret the tiny remains. One of the best preserved embryo skeletons was still curled up inside an egg that was less than three inches long. The scientists identified the embryos as belonging to a long-necked, short-tailed, plant-eating dinosaur called Massospondylus. They were relatively common in what is now South Africa in the beginning of the Jurassic period. All previous dinosaur embryos have been from the Cretaceous period, which ended 65 million years ago. As adults, these creatures reached lengths of more than 15 feet and were able to walk on two legs. Yet the new research suggested that their hatchlings began life moving about on all fours, the scientists reported. Dr. Reisz and his colleagues came to this surprising conclusion from a detailed examination of the horizontal neck, heavy head and limb proportions of two well-preserved embryo skeletons. This appeared to mean that the young were quadrupeds and somehow matured into bipeds, a pattern of development, they said, that was almost unheard of among vertebrates. "The results have major implications for our understanding of how these animals grew and evolved," Dr. Reisz said. Dr. Reisz, a professor of biology, was joined in reporting the research by other Toronto scientists and dinosaur experts at the National Museum of Natural History of the Smithsonian Institution and the University of Witwatersrand in Johannesburg. James Clark, a paleontologist at George Washington University, who was not involved in the research, said the discovery was "exciting in providing a major piece of the puzzle" of how large plant-eating dinosaurs reproduced and, in at least one case, started life on four legs and grew to be two-legged animals. 
From HowlBloom at aol.com Sat Jul 30 06:14:49 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sat, 30 Jul 2005 02:14:49 EDT Subject: [Paleopsych] Re: on islam Message-ID: <128.61c67403.301c74d9@aol.com> In a message dated 7/29/2005 11:40:19 AM Eastern Standard Time, psaartist at hotmail.com writes: Sorry about that. I just put it into the text of this email- Hello Howard, I'm afraid this week I have failed outright as a research assistant, but I have been ruminating and writing down a few impressions about Islam with your project in mind. The following ideas were written down over the course of several days and are in varying degrees of wholeness. I hope some of them might help you to see the shapes of your own ideas, either in the light of my good ones or against the shadows of my bad ones. I did find some dissidents in Islam through my brief search on Google: http://www.islamreview.com/articles/islamapostasy.shtml http://www.bharat-rakshak.com/SRR/Volume11/gayatri.html I think someone in the online conference asked for a Muslim Gandhi. The second of these links answers to that, the Muslim "Frontier Gandhi". Reading their stories tends to make Islam look worse rather than better, though. Basically because of the lies, threats, violence and hypocrisy they had to try to endure. In some cases with Western collaboration. Fundamentally, every power system instinctively hates a dissident. I think Islam has a different "flavor", a different character when it comes to ideas of authority, law, censure, belonging and transgression. Its basic spirit, to which it so often returns, seems to be legalistic (subject to interpretation and manipulation) and authoritarian rather than humane. It is closed and fearful of challenge rather than open. It is not merciless, but even the power of mercy is used as a demonstration of authority. Part of this comes from the tribal and ethnic environments where Islam took hold. hb: these are extremely good observations. 
Punishment and robotic devotion to the duties of the religion are prized. Creativity and individuality are minimized. In Bloomian terms, it's a religion whose conformity enforcers outweigh its diversity generators. That may well be why it has fared so poorly against the West ever since the Industrial Revolution and the Scientific Spirit blossomed in Europe around 1750. The West has trouble dealing with Islam because the West has trouble dealing with religion. Philosophically or ethically, Christian or Jewish fundamentalists can face fundamentalist Islam only on the most impoverished terms - "You are devils, we are angels". The contest is therefore left to be decided in terms of pure violence. Liberal Westerners, secularists, avoid making any definitive criticism of a religion or a religious person because that would violate ideas of understanding, tolerance, inclusiveness, and humility. In a sense they are determined not to seek a decision to the contest. Each half of the West, the religious and the secular, is hamstrung because it knows it cannot count on the support of the other. I'm sure the Jihadists are delighted to observe this. hb: extremely good observations, Peter. The attractions of the Enlightenment, the positive example, are being dimmed and perverted by the excesses (or the essence) of the war on Terror, which is a radical, lawless, barbarizing Western Jihad created by fundamentalists. One might wish for a tough, clear-minded secularist player on the scene. hb: I'm trying to be one voice for this approach. But my voice is a mere squeak at this point, limited primarily to books and to a fairly steady run of national talk radio appearances. The EU? The French, who ban all forms of religious expression from their schools? I don't know how successful any of their efforts are. 
A side question- given economic prosperity, education, and a fairly stable long-term social situation, will people from Muslim cultures behave like a mirror image of secularist westerners? hb: which Western model do you mean-- 1) the revolutionary Marxists who clubbed the brains out of the Russians and the Chinese until 1989 and killed 80 million people in the process, but who fed the need of humans in their teens and twenties to rebel against their parents? 2) or the folks who have gone to their jobs and been part of "the system", a system whose rich, rich rewards they fail to see? Opening the eyes of Westerners to the benefits and future uplift of "the system" is what Reinventing Capitalism: Putting Soul In the Machine--A Radical Reperception of Western Civilization is all about. That's my half-completed next book. One might be considered a racist for asking the question, or for necessarily expecting either a yes or a no answer to it. hb: but this social ritual we call "condemning racism" is another of the perceptual throttles you refer to above. To paraphrase you, we forbid certain forms of thought in order to maintain what we cherish, pluralism, tolerance, and free speech. And we are easily led to think that thoughts vital to our survival fall into the forbidden zones...even when they don't. Islam is not a race! To put it another way, is the Western secularist "flavor" a sort of inevitable world-historical ideal? hb: nothing is inevitable. Winston Churchill proved that with a sufficient exertion of will and perseverance, we humans can change history. Lenin before him proved the same thing. And the rise of the ideas of a relatively anonymous thinker, Karl Marx, proves that multi-generational projects kept aloft by generations of stubborn persistence can make massive differences in the path that social evolution takes. 
Even if those paths lead to dead ends 150 years down the line (from 1848, when Marx and Engels issued the Communist Manifesto, to 1989, when the Berlin Wall fell). Is it an idea of the same species as the inevitable Marxist worldwide revolution? That is, an assumption taken for granted by its adherents (the way fish don't notice the water they swim in), but about to encounter a big world that has other plans? Me, I don't have an answer to these questions. hb: in the terms of my second book, Global Brain, the battle between groups, the battle between civilizations, represents a test of alternative hypotheses in the group mind. Unfortunately, the groups that fare best in battle are sometimes the worst at running a society at peace. But that easy generalization may not be true. The society that wins is the one with the greatest will, the highest morale, and the most unending supply of resources. Germany and Japan ran out of resources. The Allies had America, a resource bonanza way back then. So the Allies won WWII. Same thing happens in contests between lizards, lobsters, or stags for dominance. The animal with the highest degree of confidence and the largest reserve of resources wins. Which leads to a question. What is confidence? 
Based on the work of Neil Greenberg with anolis lizards, I'd say confidence is an emotional and perceptual setting that allows a creature:

1) to see opportunities in slices of reality others would regard as threatening
2) to maintain a sense of perceived control
3) to hang on to the serotonin and dopamine settings these perceptions produce
4) to avoid the non-stop flood of stress hormones that poison an animal's ability to outdo others at shows of majesty, decisiveness, calm under pressure, and implied menace
5) to use the stress hormones only in actual combat, when those hormones are arousers, not poisons

In other words, perception, physiology, group organization, and resources work hand in hand to produce winners and losers, winners of intergroup tournaments or winners of personal struggles. In humans, it helps to have a worldview that allows you to perceive the riches in what you've got and that helps you see how what you do contributes to something larger than yourself, something that uplifts your people as a group and that uplifts the individual people around you. If you've got that, you can tap the hormonal cocktail of idealism. What's more, your perceptions influence your resources. The Tasmanians died when they'd hunted down all the land animals on their island. They died of starvation. Why? Their worldview, their collective perceptual system, told them that the animals of the sea, fish and other seafood, were inedible trash. The Japanese lived on islands more impoverished than those of the Tasmanians. But they saw everything around them as a useable resource. So the Tasmanian perceptual system killed its people off. The Japanese perceptual system has been a winner for roughly 1,300 years (roughly the same amount of time that Islam has been a worldview using the people it manipulates, empowers, and motivates as a test vehicle). Islam is poor at seeing resources in rubble. But it's very good at organized violence and unconventional warfare. 
Can a worldview that impoverishes its people to stoke their sense of victimization and their need for revenge, for justice, and for the purity of god's own laws beat a worldview that has created relative wealth even for its poorest citizens? (One of our local homeless men gets his food and coffee at Starbucks and gourmet delis, owns a bicycle, supports the luxury of god-knows-what-self-destructive-habit, and has access to trash that's the equivalent of treasure even to me.) Only if this civilization fails to perceive the riches it creates--the spiritual riches that come from "consumerism" and "materialism". And only if this civilization fails to perceive the riches in what it now discards as trash--passion, emotion, and empathy--the things that we need most to upgrade the jobs we go to every day and to upgrade the companies that give us those jobs. That, too, is the goal of Reinventing Capitalism. To get us Tasmanians to see the riches in the seas around us. Is Islam at all compatible with democracy? hb: yes. The Iranians are proving that, despite the headlock the conservative mullahs have on the current government...and may have for another ten years or so, but may eventually lose as the old guard of the 1979 Revolution dies out. The Lebanese are now trying to prove that Islam and democracy can work together, too. That remains to be seen. Where democracy and Islam mix, will one of them have to become essentially denatured? hb: bureaucracy, whether it's at the DMV here in Brooklyn or in the government of Hosni Mubarak in Cairo, needs the perceptual upgrade of reinventing capitalism. Bureaucracies have to be restructured so that bureaucrats know that their task is to use their hearts--their empathic passions--to defend and advance other humans. What are today the various layers in Islam of what is, what is desired, and what cannot be expressed? hb: Islam is out to achieve its "just place" in the hierarchy of cultures--the number one slot. 
That's what God has promised. That's what God says Islam must be--number one, top dog. And God has said that if that requires "making wide slaughter in the land" (a key phrase and a key message in the Koran and the Hadith, the additional Islamic holy books), then so be it. Social standing often means more than food and water to individual humans and to human groups. Howard, in one of the Sunday night online conferences you described Islam as having an anti-art stance, and you seemed to see Persian representational (that is, pictorial) art as a minor exception that only serves to prove the rule. It is indeed true that many Islamic cultures prohibit any earthly thing from being depicted, and some prohibit music to varying degrees. However, a moment's reflection should bring to mind the many centuries of Islamic development in many arts, in some areas to the highest point. For example, weaving, architecture, design, ceramics, poetry, music, calligraphy. The Quran itself in its recital is a consummate work of literary, poetic and performative art. hb: I've been counting Islam's contributions today, and compared to those of the West they are scant. Islam has given us fabulous architecture, architecture based on Western models, fabulous calligraphy, and one fabulous book--The Thousand and One Arabian Nights. The Koran is considered the epitome of literature, God's own verses, in the world of Islam. But read it in English and it comes across as a primitive hash. Many of the things attributed to Arabs and Islam are borrowings--Arab numerals, for instance, which are Indian, not Arabic. However, Islamic culture provided a vital transit point that quickened the commerce, the interchange, of styles and ideas, giving westerners the silks of China, the ceramics of China, the mathematics of India, and the literature and philosophy of classical Greece (a literature the West lost track of until the tenth century, when it trickled from Moslem Spain into Christian Europe). 
Arabs invented a new form of sea-faring, using the triangular sail (the lateen sail) to tack into the wind and inventing a way to harvest the catastrophe of the Monsoon winds to make annual trips by sea from Oman and Yemen to India and to the Spice Islands, the islands of Java, Malacca, the Maldives, Sumatra, Aceh, the Philippines, and Zanzibar, not to mention the port of Mombasa. They invented a commerce in black African slaves that defies belief. We Westerners uprooted ten million black Africans and used them in our slave business. That is appalling and is justly labeled a "Black Holocaust". Moslem traders from Arabia and India uprooted 140 million black Africans. That's fourteen African Holocausts! The Western slave trade imposed such monstrous conditions on its captives that one out of every ten seized from their homes died somewhere in transit. The Moslem slave trade imposed such monstrous conditions that nine out of ten Africans attacked and/or captured DIED. That brings the death toll of the Black Islamic Holocaust up to the level of 126 Black Western Holocausts. 126 million deaths--the number inflicted by Moslem slave traders and slave raiders--is the equivalent of 21 of the Holocausts inflicted on the Jews by the Nazis. It's twice the combined death tolls of World War I and of World War II--the two most industrialized uses of killing machines known to man, wars in which two atomic bombs were loosed on civilian populations. 
And we are supposed to believe that decrying this turning of more than half a continent into a killing field, this mass merchandising of black humans in which all males were killed or castrated, this mass deportation of a race in dhows packed so solidly with human cargo that many of those crammed into the seafaring vessels of Arab merchants died of suffocation, this trade whose ship captains combed their cargo before entering a port to search out the weak and the ill, then to throw this faulty merchandise overboard to avoid paying import taxes on humans too feeble to sell, this trade in females and young boys as sex slaves, we are supposed to believe that denouncing this or even researching its details is a racist crime? And we are told this by apartheid states like Saudi Arabia which I, as a Jew, cannot enter? Why? Because of my tribal identity, my race, my Jewish genes, my Jewish blood, and my Jewish genealogy. And we Jews are supposed to believe that we, who often live peacefully among Arabs and Moslems as I did when living outside the Arab town of Afullah and as I do in a Brooklyn neighborhood riddled with mosques, mosques in which bomb plots to destroy the World Trade Center and to destroy the New York Subway and rail system have been hatched, we are supposed to believe that I am some sort of Nazi who lobbies for apartheid? This is a violently perverse perception, one which we voluntarily enforce. It is a system of censorship which we gladly encourage, often for the reasons you have pinpointed, because it fits our sense of fairness and tolerance. But is it really fair to decry our murders and to close our eyes to piles of bodies far higher than any we have ever erected? If we are ethical and prize human life, isn't it incumbent on us to open our eyes and to decry both Islam's crimes and ours? Or are we here to inflict so much guilt on ourselves that we kill the civilization that has given even Moslems in the slums of Cairo TVs and radios? 
Should we really condemn the mix of capitalism and open criticism that has given spoiled Moslem middle class and rich kids like Osama bin Laden and his foot soldiers computers and cell phones? Should we despise the civilization that has brought ordinary Japanese, Koreans, Taiwanese, Thais, Filipinos, Indians, and, now, Chinese from starvation to wealth beyond the power of 19th Century kings? Should we overthrow a Western system that has produced the anti-slavery movement, the anti-imperialism movement, the human rights movement, the environmental movement, Greenpeace, Amnesty International, the ACLU, NASA, solar energy, hybrid vehicles, and the first steps toward a possible hydrogen economy? This Islamic material is what I'm working on for the Tenth Anniversary Edition of The Lucifer Principle: A Scientific Expedition Into the Forces of History. And the reperception of the Western System is the raison d'etre of Reinventing Capitalism: Putting Soul In the Machine--A Radical Reperception of Western Civilization. All thanks for some stunningly good insights, Peter. I've read the rest and have made one more comment. You are a good, good thinker. But this is where my energy ebbed. Onward The reason I raise this issue is that I see a danger of a sort of "momentum" of negative criticism when one looks at Islam and its many problems. To find oneself convinced that Islamic expression is against art is to have lost one's bearings in the argument. That would be equivalent to thinking that Chartres cathedral is against art. Perhaps the issue in Islamic art that offends a child of the enlightenment (I will presume to put us in that category) is that Islamic expression seems consistently to be against independence. This issue, so important to us, may make us look at the work as deficient and backward. I think what we are really seeing is that the work totally refuses to participate in the Western modern project. 
I'm thinking of that project as secular, humanist, trying to explore without a predetermined destination. As much of the world has taken on this modernist (and post-modern, etc.) quality, and as things in general are made industrially rather than by hand, Islamic art has been uprooted and stifled and as far as I know hasn't produced anything of fulfilling greatness in our era. One exception could be Islamic (Sufi) music, which seems to be in quite a healthy state, as the late Nusrat Fateh Ali Khan and others have recently and abundantly demonstrated. On learning and writing about Islam- I have gone through a couple of amateurish phases of trying to learn about Islam, and have always ended up feeling that my conclusions have missed the essential point of what I was trying to grasp. To say that Islam is multi-dimensional is an understatement: I'll say that it is beyond multidimensional. In a way it is even misleading to say that Islam is "a religion", much less to say that it is "a religion politicised" or anything along those lines. I think it is a phenomenon that transcends "religion" at all times and under all circumstances. This is because the essence of its project is the necessary capture and integration of the social, ethical, ritual, religious, spoken, gender, philosophical, legal, familial, and other spheres, on and on. (I am not here making any claims about the justness or even the real feasibility of this project.) hb: as Osama says, Islam is about unity, it is about one-ness. One God. One code of laws. Incorporating all aspects of human life in that holy One. This is a phenomenon not properly equivalent to, say, Christian fundamentalism. (I don't know anything about Jewish fundamentalism so I won't try to compare.) I say it is not equivalent because its foundation, the Islamic tradition, is a much more intensely sophisticated cultural machine. 
This is not to say Islam will always win in a battle of conversions, or that its exegetes were smarter or more perceptive than the Christians'. I am just suggesting that Islam's nature is to recognize, emphasize and penetrate more profoundly into more spheres of experience than does Christianity. In writing on Islam, one might want to restrict oneself to a specific slice, say, the jihadis, without casting an endlessly wide net for contributing factors to the phenomenon. A problem arises because it's like being asked to describe just one cell in a highly developed brain, along with, oh, only the cells directly connected to it. (By cells I'm not representing terrorist units, but historical factors.) Well, to understand how that cell arose and is functioning, you have to understand the function of every cell it's connected to, and generally the reader has minimal grasp of any of those functions, so you have to explain those, and so on. It's a problem because, upon examination, each cell turns out not to be doing quite what you would have assumed. It's not a matter of exoticism and mystery, it's just a huge amount of information we never knew existed. So you categorize and abstract and interpolate, as any writer must, but with every abstraction you risk a distortion. One feels like the blind man asked to describe an elephant. He can only report on the limited part he has been able to reach. 
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net From checker at panix.com Sat Jul 30 15:25:36 2005 From: checker at panix.com (Premise Checker) Date: Sat, 30 Jul 2005 11:25:36 -0400 (EDT) Subject: [Paleopsych] NYTBR: Richard Posner: Bad News Message-ID: Richard Posner: Bad News http://www.nytimes.com/2005/07/31/books/review/31POSNER.html ["Up Front" attached] BOOKS DISCUSSED IN THIS ESSAY Press Bias and Politics: How the Media Frame Controversial Issues, by Jim A. Kuypers. Praeger. Paper, $28.95. All the News That's Fit to Sell: How the Market Transforms Information Into News, by James T. Hamilton. Princeton University. $37.95. The Future of Media: Resistance and Reform in the 21st Century, edited by Robert W. 
McChesney, Russell Newman and Ben Scott. Seven Stories. Paper, $19.95. Coloring the News: How Political Correctness Has Corrupted American Journalism, by William McGowan. Encounter. Paper, $16.95. Now They Tell Us: The American Press and Iraq, by Michael Massing. New York Review. Paper, $9.95. What Liberal Media? The Truth About Bias and the News, by Eric Alterman. Basic Books. Paper, $15. Bias: A CBS Insider Exposes How the Media Distort the News, by Bernard Goldberg. Perennial/ HarperCollins. Paper, $13.95. Weapons of Mass Distortion: The Coming Meltdown of the Liberal Media, by L. Brent Bozell III. Three Rivers. Paper, $13.95. THE conventional news media are embattled. Attacked by both left and right in book after book, rocked by scandals, challenged by upstart bloggers, they have become a focus of controversy and concern. Their audience is in decline, their credibility with the public in shreds. In a recent poll conducted by the Annenberg Public Policy Center, 65 percent of the respondents thought that most news organizations, if they discover they've made a mistake, try to ignore it or cover it up, and 79 percent opined that a media company would hesitate to carry negative stories about a corporation from which it received substantial advertising revenues. The industry's critics agree that the function of the news is to inform people about social, political, cultural, ethical and economic issues so that they can vote and otherwise express themselves as responsible citizens. They agree on the related point that journalism is a profession rather than just a trade and therefore that journalists and their employers must not allow profit considerations to dominate, but must acknowledge an ethical duty to report the news accurately, soberly, without bias, reserving the expression of political preferences for the editorial page and its radio and television counterparts. 
The critics further agree, as they must, that 30 years ago news reporting was dominated by newspapers and by television network news and that the audiences for these media have declined with the rise of competing sources, notably cable television and the Web. The audience decline is potentially fatal for newspapers. Not only has their daily readership dropped from 52.6 percent of adults in 1990 to 37.5 percent in 2000, but the drop is much steeper in the 20-to-49-year-old cohort, a generation that is, and as it ages will remain, much more comfortable with electronic media in general and the Web in particular than the current elderly are. At this point the diagnosis splits along political lines. Liberals, including most journalists (because most journalists are liberals), believe that the decline of the formerly dominant ''mainstream'' media has caused a deterioration in quality. They attribute this decline to the rise of irresponsible journalism on the right, typified by the Fox News Channel (the most-watched cable television news channel), Rush Limbaugh's radio talk show and right-wing blogs by Matt Drudge and others. But they do not spare the mainstream media, which, they contend, provide in the name of balance an echo chamber for the right. To these critics, the deterioration of journalism is exemplified by the attack of the ''Swift boat'' Vietnam veterans on Senator John Kerry during the 2004 election campaign. The critics describe the attack as consisting of lies propagated by the new right-wing media and reported as news by mainstream media made supine by anxiety over their declining fortunes. Critics on the right applaud the rise of the conservative media as a long-overdue corrective to the liberal bias of the mainstream media, which, according to Jim A. 
Kuypers, the author of ''Press Bias and Politics,'' are ''a partisan collective which both consciously and unconsciously attempts to persuade the public to accept its interpretation of the world as true.'' Fourteen percent of Americans describe themselves as liberals, and 26 percent as conservatives. The corresponding figures for journalists are 56 percent and 18 percent. This means that of all journalists who consider themselves either liberal or conservative, 76 percent consider themselves liberal, compared with only 35 percent of the public that has a stated political position. So politically one-sided are the mainstream media, the right complains (while sliding over the fact that the owners and executives, as distinct from the working journalists, tend to be far less liberal), that not only do they slant the news in a liberal direction; they will stop at nothing to defeat conservative politicians and causes. The right points to the ''60 Minutes II'' broadcast in which Dan Rather paraded what were probably forged documents concerning George W. Bush's National Guard service, and to Newsweek's erroneous report, based on a single anonymous source, that an American interrogator had flushed a copy of the Koran down the toilet (a physical impossibility, one would have thought). Strip these critiques of their indignation, treat them as descriptions rather than as denunciations, and one sees that they are consistent with one another and basically correct. The mainstream media are predominantly liberal - in fact, more liberal than they used to be. But not because the politics of journalists have changed. Rather, because the rise of new media, itself mainly an economic rather than a political phenomenon, has caused polarization, pushing the already liberal media farther left. The news media have also become more sensational, more prone to scandal and possibly less accurate. 
But note the tension between sensationalism and polarization: the trial of Michael Jackson got tremendous coverage, displacing a lot of political coverage, but it had no political valence. The interesting questions are, first, the why of these trends, and, second, so what? The why is the vertiginous decline in the cost of electronic communication and the relaxation of regulatory barriers to entry, leading to the proliferation of consumer choices. Thirty years ago the average number of television channels that Americans could receive was seven; today, with the rise of cable and satellite television, it is 71. Thirty years ago there was no Internet, therefore no Web, hence no online newspapers and magazines, no blogs. The public's consumption of news and opinion used to be like sucking on a straw; now it's like being sprayed by a fire hose. To see what difference the elimination of a communications bottleneck can make, consider a town that before the advent of television or even radio had just two newspapers because economies of scale made it impossible for a newspaper with a small circulation to break even. Each of the two, to increase its advertising revenues, would try to maximize circulation by pitching its news to the median reader, for that reader would not be attracted to a newspaper that flaunted extreme political views. There would be the same tendency to political convergence that is characteristic of two-party political systems, and for the same reason - attracting the least committed is the key to obtaining a majority. One of the two newspapers would probably be liberal and have a loyal readership of liberal readers, and the other conservative and have a loyal conservative readership. That would leave a middle range. To snag readers in that range, the liberal newspaper could not afford to be too liberal or the conservative one too conservative. 
The former would strive to be just liberal enough to hold its liberal readers, and the latter just conservative enough to hold its conservative readers. If either moved too close to its political extreme, it would lose readers in the middle without gaining readers from the extreme, since it had them already. But suppose cost conditions change, enabling a newspaper to break even with many fewer readers than before. Now the liberal newspaper has to worry that any temporizing of its message in an effort to attract moderates may cause it to lose its most liberal readers to a new, more liberal newspaper; for with small-scale entry into the market now economical, the incumbents no longer have a secure base. So the liberal newspaper will tend to become even more liberal and, by the same process, the conservative newspaper more conservative. (If economies of scale increase, and as a result the number of newspapers grows, the opposite ideological change will be observed, as happened in the 19th century. The introduction of the ''penny press'' in the 1830's enabled newspapers to obtain large circulations and thus finance themselves by selling advertising; no longer did they have to depend on political patronage.) The current tendency to political polarization in news reporting is thus a consequence of changes not in underlying political opinions but in costs, specifically the falling costs of new entrants. The rise of the conservative Fox News Channel caused CNN to shift to the left. CNN was going to lose many of its conservative viewers to Fox anyway, so it made sense to increase its appeal to its remaining viewers by catering more assiduously to their political preferences. The tendency to greater sensationalism in reporting is a parallel phenomenon. The more news sources there are, the more intense the struggle for an audience. 
One tactic is to occupy an overlooked niche - peeling away from the broad-based media a segment of the consuming public whose interests were not catered to previously. That is the tactic that produces polarization. Another is to ''shout louder'' than the competitors, where shouting takes the form of a sensational, attention-grabbing discovery, accusation, claim or photograph. According to James T. Hamilton in his valuable book ''All the News That's Fit to Sell,'' this even explains why the salaries paid news anchors have soared: the more competition there is for an audience, the more valuable is a celebrity newscaster. The argument that competition increases polarization assumes that liberals want to read liberal newspapers and conservatives conservative ones. Natural as that assumption is, it conflicts with one of the points on which left and right agree - that people consume news and opinion in order to become well informed about public issues. Were this true, liberals would read conservative newspapers, and conservatives liberal newspapers, just as scientists test their hypotheses by confronting them with data that may refute them. But that is not how ordinary people (or, for that matter, scientists) approach political and social issues. The issues are too numerous, uncertain and complex, and the benefit to an individual of becoming well informed about them too slight, to invite sustained, disinterested attention. Moreover, people don't like being in a state of doubt, so they look for information that will support rather than undermine their existing beliefs. They're also uncomfortable seeing their beliefs challenged on issues that are bound up with their economic welfare, physical safety or religious and moral views. So why do people consume news and opinion? In part it is to learn of facts that bear directly and immediately on their lives - hence the greater attention paid to local than to national and international news. 
They also want to be entertained, and they find scandals, violence, crime, the foibles of celebrities and the antics of the powerful all mightily entertaining. And they want to be confirmed in their beliefs by seeing them echoed and elaborated by more articulate, authoritative and prestigious voices. So they accept, and many relish, a partisan press. Forty-three percent of the respondents in the poll by the Annenberg Public Policy Center thought it ''a good thing if some news organizations have a decidedly political point of view in their coverage of the news.'' Being profit-driven, the media respond to the actual demands of their audience rather than to the idealized ''thirst for knowledge'' demand posited by public intellectuals and deans of journalism schools. They serve up what the consumer wants, and the more intense the competitive pressure, the better they do it. We see this in the media's coverage of political campaigns. Relatively little attention is paid to issues. Fundamental questions, like the actual difference in policies that might result if one candidate rather than the other won, get little play. The focus instead is on who's ahead, viewed as a function of campaign tactics, which are meticulously reported. Candidates' statements are evaluated not for their truth but for their adroitness; it is assumed, without a hint of embarrassment, that a political candidate who levels with voters disqualifies himself from being taken seriously, like a racehorse that tries to hug the outside of the track. News coverage of a political campaign is oriented to a public that enjoys competitive sports, not to one that is civic-minded. We saw this in the coverage of the selection of Justice Sandra Day O'Connor's successor. It was played as an election campaign; one article even described the jockeying for the nomination by President Bush as the ''primary election'' and the fight to get the nominee confirmed by the Senate the ''general election'' campaign. 
With only a few exceptions, no attention was paid to the ability of the people being considered for the job or the actual consequences that the appointment was likely to have for the nation. Does this mean that the news media were better before competition polarized them? Not at all. A market gives people what they want, whether they want the same thing or different things. Challenging areas of social consensus, however dumb or even vicious the consensus, is largely off limits for the media, because it wins no friends among the general public. The mainstream media do not kick sacred cows like religion and patriotism. Not that the media lie about the news they report; in fact, they have strong incentives not to lie. Instead, there is selection, slanting, decisions as to how much or how little prominence to give a particular news item. Giving a liberal spin to equivocal economic data when conservatives are in power is, as the Harvard economists Sendhil Mullainathan and Andrei Shleifer point out, a matter of describing the glass as half empty when conservatives would describe it as half full. Journalists are reluctant to confess to pandering to their customers' biases; it challenges their self-image as servants of the general interest, unsullied by commerce. They want to think they inform the public, rather than just satisfying a consumer demand no more elevated or consequential than the demand for cosmetic surgery in Brazil or bullfights in Spain. They believe in ''deliberative democracy'' - democracy as the system in which the people determine policy through deliberation on the issues. In his preface to ''The Future of Media'' (a collection of articles edited by Robert W. McChesney, Russell Newman and Ben Scott), Bill Moyers writes that ''democracy can't exist without an informed public.'' If this is true, the United States is not a democracy (which may be Moyers's dyspeptic view). 
Only members of the intelligentsia, a tiny slice of the population, deliberate on public issues. The public's interest in factual accuracy is less an interest in truth than a delight in the unmasking of the opposition's errors. Conservatives were unembarrassed by the errors of the Swift Boat veterans, while taking gleeful satisfaction in the exposure of the forgeries on which Dan Rather had apparently relied, and in his resulting fall from grace. They reveled in Newsweek's retracting its story about flushing the Koran down a toilet yet would prefer that American abuse of prisoners be concealed. Still, because there is a market demand for correcting the errors and ferreting out the misdeeds of one's enemies, the media exercise an important oversight function, creating accountability and deterring wrongdoing. That, rather than educating the public about the deep issues, is their great social mission. It shows how a market produces a social good as an unintended byproduct of self-interested behavior. The limited consumer interest in the truth is the key to understanding why both left and right can plausibly denounce the same media for being biased in favor of the other. Journalists are writing to meet a consumer demand that is not a demand for uncomfortable truths. So a newspaper that appeals to liberal readers will avoid exposés of bad behavior by blacks or homosexuals, as William McGowan charges in ''Coloring the News''; similarly, Daniel Okrent, the first ombudsman of The New York Times, said that the news pages of The Times ''present the social and cultural aspects of same-sex marriage in a tone that approaches cheerleading.'' Not only would such exposés offend liberal readers who are not black or homosexual; many blacks and homosexuals are customers of liberal newspapers, and no business wants to offend a customer. 
But the same liberal newspaper or television news channel will pull some of its punches when it comes to reporting on the activities of government, even in Republican administrations, thus giving credence to the left critique, as in Michael Massing's ''Now They Tell Us,'' about the reporting of the war in Iraq. A newspaper depends on access to officials for much of its information about what government is doing and planning, and is reluctant to bite too hard the hand that feeds it. Nevertheless, it is hyperbole for Eric Alterman to claim in ''What Liberal Media?'' that ''liberals are fighting a near-hopeless battle in which they are enormously outmatched by most measures'' by the conservative media, or for Bill Moyers to say that ''the marketplace of political ideas'' is dominated by a ''quasi-official partisan press ideologically linked to an authoritarian administration.'' In a sample of 23 leading newspapers and newsmagazines, the liberal ones had twice the circulation of the conservative. The bias in some of the reporting in the liberal media, acknowledged by Okrent, is well documented by McGowan, as well as by Bernard Goldberg in ''Bias'' and L. Brent Bozell III in ''Weapons of Mass Distortion.'' Journalists minimize offense, preserve an aura of objectivity and cater to the popular taste for conflict and contests by - in the name of ''balance'' - reporting both sides of an issue, even when there aren't two sides. So ''intelligent design,'' formerly called by the oxymoron ''creation science,'' though it is religious dogma thinly disguised, gets almost equal billing with the theory of evolution. If journalists admitted that the economic imperatives of their industry overrode their political beliefs, they would weaken the right's critique of liberal media bias. The latest, and perhaps gravest, challenge to the journalistic establishment is the blog. Journalists accuse bloggers of having lowered standards. 
But their real concern is less high-minded - it is the threat that bloggers, who are mostly amateurs, pose to professional journalists and their principal employers, the conventional news media. A serious newspaper, like The Times, is a large, hierarchical commercial enterprise that interposes layers of review, revision and correction between the reporter and the published report and that to finance its large staff depends on advertising revenues and hence on the good will of advertisers and (because advertising revenues depend to a great extent on circulation) readers. These dependences constrain a newspaper in a variety of ways. But in addition, with its reputation heavily invested in accuracy, so that every serious error is a potential scandal, a newspaper not only has to delay publication of many stories to permit adequate checking but also has to institute rules for avoiding error - like requiring more than a single source for a story or limiting its reporters' reliance on anonymous sources - that cost it many scoops. Blogs don't have these worries. Their only cost is the time of the blogger, and that cost may actually be negative if the blogger can use the publicity that he obtains from blogging to generate lecture fees and book royalties. Having no staff, the blogger is not expected to be accurate. Having no advertisers (though this is changing), he has no reason to pull his punches. And not needing a large circulation to cover costs, he can target a segment of the reading public much narrower than a newspaper or a television news channel could aim for. He may even be able to pry that segment away from the conventional media. Blogs pick off the mainstream media's customers one by one, as it were. And bloggers thus can specialize in particular topics to an extent that few journalists employed by media companies can, since the more that journalists specialized, the more of them the company would have to hire in order to be able to cover all bases. 
A newspaper will not hire a journalist for his knowledge of old typewriters, but plenty of people in the blogosphere have that esoteric knowledge, and it was they who brought down Dan Rather. Similarly, not being commercially constrained, a blogger can stick with and dig into a story longer and deeper than the conventional media dare to, lest their readers become bored. It was the bloggers' dogged persistence in pursuing a story that the conventional media had tired of that forced Trent Lott to resign as Senate majority leader. What really sticks in the craw of conventional journalists is that although individual blogs have no warrant of accuracy, the blogosphere as a whole has a better error-correction machinery than the conventional media do. The rapidity with which vast masses of information are pooled and sifted leaves the conventional media in the dust. Not only are there millions of blogs, and thousands of bloggers who specialize, but, what is more, readers post comments that augment the blogs, and the information in those comments, as in the blogs themselves, zips around blogland at the speed of electronic transmission. This means that corrections in blogs are also disseminated virtually instantaneously, whereas when a member of the mainstream media catches a mistake, it may take weeks to communicate a retraction to the public. This is true not only of newspaper retractions - usually printed inconspicuously and in any event rarely read, because readers have forgotten the article being corrected - but also of network television news. It took CBS so long to acknowledge Dan Rather's mistake because there are so many people involved in the production and supervision of a program like ''60 Minutes II'' who have to be consulted. The charge by mainstream journalists that blogging lacks checks and balances is obtuse. The blogosphere has more checks and balances than the conventional media; only they are different. 
The model is Friedrich Hayek's classic analysis of how the economic market pools enormous quantities of information efficiently despite its decentralized character, its lack of a master coordinator or regulator, and the very limited knowledge possessed by each of its participants. In effect, the blogosphere is a collective enterprise - not 12 million separate enterprises, but one enterprise with 12 million reporters, feature writers and editorialists, yet with almost no costs. It's as if The Associated Press or Reuters had millions of reporters, many of them experts, all working with no salary for free newspapers that carried no advertising. How can the conventional news media hope to compete? Especially when the competition is not entirely fair. The bloggers are parasitical on the conventional media. They copy the news and opinion generated by the conventional media, often at considerable expense, without picking up any of the tab. The degree of parasitism is striking in the case of those blogs that provide their readers with links to newspaper articles. The links enable the audience to read the articles without buying the newspaper. The legitimate gripe of the conventional media is not that bloggers undermine the overall accuracy of news reporting, but that they are free riders who may in the long run undermine the ability of the conventional media to finance the very reporting on which bloggers depend. Some critics worry that ''unfiltered'' media like blogs exacerbate social tensions by handing a powerful electronic platform to extremists at no charge. Bad people find one another in cyberspace and so gain confidence in their crazy ideas. The conventional media filter out extreme views to avoid offending readers, viewers and advertisers; most bloggers have no such inhibition. The argument for filtering is an argument for censorship. (That it is made by liberals is evidence that everyone secretly favors censorship of the opinions he fears.) 
But probably there is little harm and some good in unfiltered media. They enable unorthodox views to get a hearing. They get 12 million people to write rather than just stare passively at a screen. In an age of specialization and professionalism, they give amateurs a platform. They allow people to blow off steam who might otherwise adopt more dangerous forms of self-expression. They even enable the authorities to keep tabs on potential troublemakers; intelligence and law enforcement agencies devote substantial resources to monitoring blogs and Internet chat rooms. And most people are sensible enough to distrust communications in an unfiltered medium. They know that anyone can create a blog at essentially zero cost, that most bloggers are uncredentialed amateurs, that bloggers don't employ fact checkers and don't have editors and that a blogger can hide behind a pseudonym. They know, in short, that until a blogger's assertions are validated (as when the mainstream media acknowledge an error discovered by a blogger), there is no reason to repose confidence in what he says. The mainstream media, by contrast, assure their public that they make strenuous efforts to prevent errors from creeping into their articles and broadcasts. They ask the public to trust them, and that is why their serious errors are scandals. A survey by the National Opinion Research Center finds that the public's confidence in the press declined from about 85 percent in 1973 to 59 percent in 2002, with most of the decline occurring since 1991. Over both the longer and the shorter period, there was little change in public confidence in other major institutions. So it seems there are special factors eroding trust in the news industry. One is that the blogs have exposed errors by the mainstream media that might otherwise have gone undiscovered or received less publicity. 
Another is that competition by the blogs, as well as by the other new media, has pushed the established media to get their stories out faster, which has placed pressure on them to cut corners. So while the blogosphere is a marvelous system for prompt error correction, it is not clear whether its net effect is to reduce the amount of error in the media as a whole. But probably the biggest reason for declining trust in the media is polarization. As media companies are pushed closer to one end of the political spectrum or the other, the trust placed in them erodes. Their motives are assumed to be political. This may explain recent Pew Research Center poll data that show Republicans increasingly regarding the media as too critical of the government and Democrats increasingly regarding them as not critical enough. Thus the increase in competition in the news market that has been brought about by lower costs of communication (in the broadest sense) has resulted in more variety, more polarization, more sensationalism, more healthy skepticism and, in sum, a better matching of supply to demand. But increased competition has not produced a public more oriented toward public issues, more motivated and competent to engage in genuine self-government, because these are not the goods that most people are seeking from the news media. They are seeking entertainment, confirmation, reinforcement, emotional satisfaction; and what consumers want, a competitive market supplies, no more, no less. Journalists express dismay that bottom-line pressures are reducing the quality of news coverage. What this actually means is that when competition is intense, providers of a service are forced to give the consumer what he or she wants, not what they, as proud professionals, think the consumer should want, or more bluntly, what they want. Yet what of the sliver of the public that does have a serious interest in policy issues? Are these people less well served than in the old days? 
Another recent survey by the Pew Research Center finds that serious magazines have held their own and that serious broadcast outlets, including that bane of the right, National Public Radio, are attracting ever larger audiences. And for that sliver of a sliver that invites challenges to its biases by reading The New York Times and The Wall Street Journal, that watches CNN and Fox, that reads Brent Bozell and Eric Alterman and everything in between, the increased polarization of the media provides a richer fare than ever before. So when all the pluses and minuses of the impact of technological and economic change on the news media are toted up and compared, maybe there isn't much to fret about. Richard A. Posner is a judge on the United States Court of Appeals for the Seventh Circuit, a senior lecturer at the University of Chicago Law School and, along with the economist Gary Becker, the author of [3]The Becker-Posner Blog. References 3. http://www.becker-posner-blog.com/ --------------- Up Front http://www.nytimes.com/2005/07/31/books/review/31UPFRONT.html By THE EDITORS How does Richard A. Posner do it? A federal appeals court judge, a senior lecturer at the University of Chicago Law School, an editor of The American Law and Economics Review and a [3]blogger, he is the author of 38 books, more than 300 articles and book reviews (including [4]one, in these pages last year, of the 9/11 Commission Report), and almost 2,200 published judicial opinions. One reaches for science fiction explanations: Posner has cloned himself; he has found a way to slow down time. Surely it's the case that he never sleeps. Posner may be inhumanly prolific, but he is neither formulaic nor superficial. 
In books like [5]''The Frontiers of Legal Theory,'' [6]''Catastrophe: Risk and Response'' and [7]''An Affair of State: The Investigation, Impeachment and Trial of President Clinton,'' he ranges widely, mastering a vast array of material, from economics, literature and philosophy to sex and aging. He is also the founder of an influential school of legal interpretation. In an [8]essay on the credibility of the news media, ''Bad News,'' Posner weaves his way through the arguments of left and right with his predictable unpredictability, providing a surprisingly nonpolitical perspective on a very political subject. References 3. http://www.becker-posner-blog.com/ 4. http://www.nytimes.com/2004/08/29/books/review/29POSNERL.html 5. http://www.nytimes.com/books/01/06/24/reviews/010624.24ryersot.html 6. http://www.nytimes.com/2005/01/02/books/review/02SINGERL.html 7. http://www.nytimes.com/library/books/990926.26sullivt.html 8. http://www.nytimes.com/2005/07/31/books/review/31POSNER.html From checker at panix.com Sat Jul 30 15:25:45 2005 From: checker at panix.com (Premise Checker) Date: Sat, 30 Jul 2005 11:25:45 -0400 (EDT) Subject: [Paleopsych] CSM: New twist on 'phishing' scam - 'pharming' Message-ID: New twist on 'phishing' scam - 'pharming' http://www.csmonitor.com/2005/0505/p13s01-stin.htm [Thanks to Laird for this.] from the May 05, 2005 edition - By [2]Gregory M. Lamb | Staff writer of The Christian Science Monitor "The pharmers are coming! The pharmers are coming!" Hang warning lanterns all over the Internet: It's under attack by a new scam. For two years users have been hearing about "phishing," the sending of bogus e-mails - allegedly from a bank or other online business - by criminals who hope to hook the unwary. Those who bite by clicking on a hyperlink in the e-mail are shipped off to a phony but authentic-looking website and asked to enter sensitive information. If they type in their passwords or account numbers, thieves have that data. 
Now phishers have been joined by "pharmers," who have made the ruse more sophisticated by planting a seed of malicious software in the user's own computer - or poisoning servers that direct traffic on the Internet. The result: Even if you type in the correct address of a website, the software can send you to a bogus one. "It's a rapidly growing threat, and one we've been seeing a lot more discussion about" among Internet security experts and people in the banking industry, says Lance Cottrell, founder and president of Anonymizer Inc. in San Diego, an Internet privacy and security firm. Phishing attacks "rely on some gullibility of and participation by the victims," Mr. Cottrell says, since they must be persuaded to click on a link within the e-mail. But not clicking on such links "is no protection against a pharming attack." Here's how the scam works. The thieves rely on the fact that the word address you use, such as www.my-bank.com, is connected to a distinct numerical address. Just like a phone number, it routes your browser to the right website. Pharming replaces the number with a fraudulent one, sending you to a criminal site instead of the real one. Besides keeping antivirus and antispyware programs up to date on their PC, users have few other ways to defend themselves from pharming. But any website that is conducting financial transactions should be able to maintain a secure website, Internet security experts say. The corner of the browser should display a padlock symbol, and the address in the address bar should begin with "https," not simply "http." Are you being scammed? To determine if you're at the real site, click on the lock symbol and make sure it displays the address you are expecting to be at, says Mikko Hyppönen, chief research officer of F-Secure, an Internet security company in Helsinki, Finland. 
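The mechanism just described - a hostname's mapping to a numerical address being silently rewritten - can be sketched in a few lines of Python. The domain names and addresses below are invented for illustration; a real resolver consults the operating system and DNS servers, not a dictionary.

```python
# Toy model of pharming: a poisoned hostname-to-IP mapping redirects a
# correctly typed address. All names and addresses below are invented.

LEGITIMATE_DNS = {"www.my-bank.com": "203.0.113.10"}  # the bank's real server

def resolve(hostname, cache):
    """Return the IP address a browser would connect to for this hostname."""
    return cache.get(hostname)

# Before the attack: typing the right name reaches the right server.
cache = dict(LEGITIMATE_DNS)
assert resolve("www.my-bank.com", cache) == "203.0.113.10"

# Pharming: malware on the PC (or a poisoned DNS server) rewrites the entry.
cache["www.my-bank.com"] = "198.51.100.66"  # attacker's server

# The user still types the correct address, yet lands on the fraudulent site.
print(resolve("www.my-bank.com", cache))  # 198.51.100.66
```

This is why the defensive advice in the article centers on the server's credentials - the padlock symbol and "https" - rather than on the typed address: once the mapping is poisoned, the name alone no longer proves where you are.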
But another kind of pharming, sometimes called "domain spoofing," "domain poisoning," or "cache poisoning," attacks the servers that route traffic around the Internet. These so-called domain name system (DNS) servers also link the word address to its underlying numerical address. To corrupt a DNS "takes significantly more expertise, more access" than attacking PCs, says Peter Cassidy, secretary-general of the Anti-Phishing Working Group, which has offices in Cambridge, Mass., and Menlo Park, Calif. That's why thieves first will try to get into individual computers. "They're the low-hanging fruit," he says. But "they'll try anything that works." Some servers are hard to crack, he says, but others don't keep their defenses up-to-date. Unlike the traditional landline telephone system, which was built from the outset to be a commercial enterprise, the Internet was designed to make sharing of information between scholars and researchers fast and easy, not for secure financial transactions. "It was built in a laboratory by guys who knew each other and married each other's sisters," Mr. Cassidy says. Now new layers of security continually must be added, as criminals probe for weak points. Spreading fraud The Anti-Phishing Working Group reports that the number of new phishing messages rose by an average 38 percent per month in the last six months of 2004. And pharming was one of the top five Internet scams in March 2005, says a recent report from the National Cyber-Forensics & Training Alliance, a nonprofit arm of the Direct Marketing Association. Internet fraud in general, which includes phishing and pharming, cost merchants $2.6 billion in 2004, $700 million more than in 2003, according to CyberSource Corp., which processes Internet financial transactions. While Cassidy has seen some disturbing pharming attack reports from Britain, "we haven't seen it taking over the universe," he says. 
"We have seen significant attacks, but not rapid proliferation, partly because it does take a little more expertise." One pharming technique is to flood the DNS server with messages to trick it into saving false information that will send users to a phony website, Cottrell says. "Then in many cases [the criminals] try to bounce you back to the real bank's website, so that you're not aware that anything has happened." Phishers and pharmers set up their fake websites for only a few days or even a few hours, then move on before they can be found out. Cottrell's company, Anonymizer, runs all its clients' Internet traffic through its own secure DNS servers, which he says can protect clients from pharming. Keyboard trouble But even if crooks can't get at your PC or the DNS server, they can always hope that you just can't spell. Early last week, F-Secure discovered that a malicious website had been set up at www.googkle.com, just one keystroke away from the famous [3]www.google.com site. Users who accidentally went to the site using the popular Internet Explorer browser immediately were inundated with spyware, adware, and other malicious software that tried to secretly load itself onto their PCs. By the end of last week, the site had disappeared. But Mr. Hypp?nen still warns people not to try to visit it out of curiosity. "These things sometimes pop up again," he says. The technique isn't new. Similar attack sites have been created just a slip of the finger away from sites such as CNN.com, AOL.com, and MSN.com, Hypp?nen says. The people behind the malicious sites can be anywhere from South Korea to Brazil to Russia. The PC operating the site could be "somebody's grandmother's computer in Canada" being remotely controlled without her knowledge, he adds. Gone 'phishing' "Phishing" means sending out official-looking e-mails to tempt users to visit a bogus website and type in personal or financial data. 
Here are key points from a March report: o Since July 2004, the number of websites linked to the scam rose an average 28 percent a month. o The United States hosted a third of the phishing sites - more than any other nation - followed by China (12 percent) and South Korea (9 percent). o Financial services are the most frequent target, with 4 of 5 phishers appropriating the brand of a bank or some other financial institution. o Such sites only last an average 5.8 days before they're taken down. o A new version of the scam - "pharming" - plants malicious software on PCs to direct users to bogus sites. Source: Anti-Phishing Working Group From checker at panix.com Sat Jul 30 15:25:58 2005 From: checker at panix.com (Premise Checker) Date: Sat, 30 Jul 2005 11:25:58 -0400 (EDT) Subject: [Paleopsych] NYT: (Andrew Grove) From Intel to Health Care and Beyond Message-ID: >From Intel to Health Care and Beyond http://www.nytimes.com/2005/07/30/technology/30nocera.html?pagewanted=print [Grove's "Efficiency in the Health Care Industries: A View From the Outside" appended. A great article indeed!] IN case you missed it, Andrew S. Grove has an article in this week's issue of the Journal of the American Medical Association. In his 68 years, Mr. Grove has written six books, including a management classic, "Only the Paranoid Survive"; a beautifully rendered memoir, "Swimming Across"; and a new book, published last month, composed of case studies that he uses in the class he teaches with Robert A. Burgelman, a Stanford University professor, entitled "Strategic Dynamics: Concepts and Cases." But it's not every day that he makes an appearance in an eminent medical journal like JAMA. The article, "Efficiency in the Health Care Industries," was labeled commentary, but it was more akin to a jeremiad. Mr. 
Grove took dead aim at the lack of efficiency in health care - the amount of time it takes a research lab to turn an idea into a working drug, for instance; and the extent to which medicine lags behind other industries in using technology to store and retrieve data, to the detriment of doctors and patients. He compared it unfavorably to an industry he knows rather intimately, microchips, which has turned efficiency into an art, thanks in no small part to Mr. Grove. The article signaled that Mr. Grove's obsession with the problems in the health care industry, problems he first explored in a 1996 Fortune magazine article about his battle with prostate cancer, has not waned. It signaled something else as well: Mr. Grove has been keeping plenty busy in retirement. Did you know that Mr. Grove, one of Silicon Valley's most iconic and influential figures, has retired from [3]Intel? Well, O.K., retirement may be a bit strong: he still has a desk at Intel, where he describes his current role as "internal agitator." (His official title is senior adviser to executive management.) But on May 18, at Intel's annual meeting, Mr. Grove resigned as chairman of the board. For the first time since 1979, when he was named its president, Intel's fortunes are not Mr. Grove's responsibility. Although the meeting was, in part, a public retirement party for Mr. Grove, the news garnered surprisingly little attention in the East Coast business media. ONE of the great joys of being a business journalist over the last quarter century has been the chance to listen to Andy Grove. As president, chief executive and finally chairman of Intel, he would periodically make the rounds of the business magazines and the business sections of the big newspapers, where he would sit in a conference room and take questions from the reporters and editors. Yes, of course, he gave us the Intel spin. But unlike most C.E.O.'s, programmed like robots to stay on message, Mr. 
Grove was willing to share his thoughts on all manner of things. With his wide-ranging intellect and his engagement with the world, he broadened our understanding of technology, strategy, the fall of communism (he escaped from communist Hungary at the age of 20), and dozens of other topics. I last sat in on one of his jam sessions maybe three years ago, and I've missed them. So a few weeks ago, I decided to bring the mountain to Mohammed. I went to visit him in Silicon Valley, to see what he was up to (audio excerpts of my interview with Mr. Grove are at [4]nytimes.com/business/columns). It turns out that he's up to quite a bit. "My mind is spinning as fast as it did then," he said, comparing his new life to his old in his mellifluous Hungarian accent. "But I'm not in meetings all day. I have the ability to pick and choose what I do, which I never had in my life. The penalty is that I deal with issues that are mammoth." We met at the office of his foundation, which, among other things, is financing stem cell research ("We are helping to keep U.S. stem cell programs limping along," he said), and trying to develop programs that will help people who are not college-bound acquire vocational skills to allow them to earn a decent living. He talked about his quest to find what he called "the Rosetta code" for the health care industry. By that he means the development of software "that takes incompatible systems and translates them into each other, so that one system can automatically read the other." He thinks there are few things more important for patients than to have any doctor, anywhere, be able to access their medical records, but because the industry is so fragmented, with so many records still in paper form, that is currently impossible. 
At Intel, most of his time is spent with a new health care group, where he pushes and prods and argues with its members as they try to figure out how to bring this laggard industry into the technological age - and with any luck, make some money for Intel in the process. We talked a bit about the central ideas in his new book, which examines what happens when a particular business environment suddenly changes and industries collide, as when, for instance, digital technology turned the music industry upside down. Mr. Grove, not surprisingly, had mainly contempt for the music industry's early efforts to keep the digital wave from coming to shore. "If the new technology is compelling enough," he said, "it will win out. When the railroads came, Wells Fargo was in trouble. When the printing press came along, the monks didn't stay around very long." Music, telephony, media: they've all faced the same disruptions, and in Mr. Grove's view they are all going to have to adapt - or else. At the annual meeting last May, he laughingly described the line "Technology will always win" as "Grove's Law." Then he moved to the subject of his latest obsession: globalization. Will it surprise you to know that this refugee from Hungary, whose company derives 70 percent of its revenue from places other than the United States, is a bear on the potential consequences of globalization on this country? He is. "I don't think there is a good outcome," he said. "I looked up a quote for you. 'If you don't believe that [globalization] changes the average wages in America, you believe in the tooth fairy.' Do you know who said that? Paul Samuelson, age 90." Although mainstream economic thought holds that America's history of creativity and entrepreneurialism will allow it to adapt to the rise of such emerging economies as India and China, Mr. Grove thinks that is so much wishful thinking. 
In his view, globalization will not only finish off what's left of American manufacturing, but will turn so-called knowledge workers, which was supposed to be America's competitive advantage, into just another global commodity. "There is an increasing trend towards laparoscopic surgery being done with robots," he said by way of example. "Once you are doing it with robotics, why do you have to be there?" The procedure might just as well be done from India. Or China. What particularly bugs Mr. Grove is that he can't see a way that this country can find the equivalent of a disruptive technology that will allow it to retain its current place atop the economic heap. He's always been someone who liked to generate big, gnarly solutions that may take years to work through, and though it may seem a tad grandiose on his part to think that he should be able to devise a way to solve America's globalization problem, it is also part of what makes him such an appealing character. "I think Intel and me and the JAMA article can move health care a few pebbles forward," he said. "This last one, I will be happy just to have some people talking about it, and legitimize it. There are no clear answers." Toward the end of the interview I asked him whether he liked his new life. "I love it," he said. "I was very ready for it. I have liked all phases of my career. I liked the technological side. I liked management. I liked discovering strategy." He likes being able to read history now, something he rarely had time for in his previous life. He likes not having to worry about every minute twist and turn in the technology industry, and how it might affect Intel. Did that last annual meeting have any special meaning? I wondered. "Nothing emotional happened that day," he replied. But, he added, he has been having dreams lately about Intel. "It is as if I'm reliving an event that happened when I was operations manager 25 years ago. 
It is not speeches, not limelight, just factory visits and arguments, which didn't really happen. I didn't used to have these dreams. So I have a lot of feelings. "I see a lot of Intel retirees," he added. "They keep company with each other. There is some nostalgia. I don't know if it is for the Intel of those days, or my younger self. "But not on May 18," he said. "That was just a nice event." --------------------- Efficiency in the Health Care Industries: A View From the Outside Andrew S. Grove, PhD JAMA. 2005;294:490-492. Vol. 294 No. 4, July 27, 2005 The health science/health care industry and the microchip industry are similar in some important ways: both are populated by extremely dedicated and well-trained individuals, both are based on science, and both are striving to put to use the result of this science. But there is a major difference between them, with a wide disparity in the efficiency with which results are developed and then turned into widely available products and services. To be sure, there are additional fundamental differences between the 2 industries. One industry deals with the well-defined world of silicon, the other with living human beings. Humans are incredibly complex biological systems, and working with them has to be subject to safety, legal, and ethical concerns. Nevertheless, it is helpful to mine this comparison for every measure of learning that can be found. First, there are important differences between health care and microchip industries in terms of research efficiency. This year marks the 40th anniversary of a construct widely known as Moore's Law, which predicts that the number of transistors that can be practically included on a microchip doubles every year. 
This law has been a guiding metric of the rate of technology development.1 According to this metric, the microchip industry has reached a state in which microchips containing many millions of transistors are shipped to the worldwide electronics industry in quantities that are measured in the billions per month. By contrast, a Fortune magazine article suggested that the rate of progress in the "war on cancer" during the same 40 years was slow.2 The dominant cause for this discrepancy appears to lie in the disparate rates of knowledge turns between the 2 industries. Knowledge turns are indicators of the time it takes for an experiment to proceed from hypothesis to results and then lead to a new hypothesis and a new result. The importance of rapid knowledge turns is widely recognized in the microchip industry. Techniques for early evaluation are designed and implemented throughout the development process. For example, simple electronic structures, called test chips, are incorporated alongside every complex experiment. The test chips are monitored as an experiment progresses. If they show negative results, the experiment is stopped, the information is recorded, and a new experiment is started. This concept is also well known in the health sciences. It is embodied in the practice of futility studies, which are designed to eliminate drugs without promise. A recent example of the use of futility studies for this purpose is the exercise of narrowing the list of putative neuroprotective agents before launching a major randomized clinical trial.3 The difference is this: whereas the surrogate "end point" in the case of microchip development--the test chip failure--is well defined, its equivalent in the health sciences is usually not. Most clinical trials fall back on an end point that compares the extent by which a new drug or therapy extends life as compared with the current standard treatment. 
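The economics of these disparate knowledge turns can be caricatured in a short sketch. All the numbers here (budget, cycle lengths, failure rate) are my own illustrative assumptions, not figures from the article; the point is only that a cheap, well-defined surrogate end point lets doomed experiments be abandoned early, multiplying the turns a fixed research budget buys.

```python
# Illustrative only: how an early surrogate read-out (a "test chip" or a
# futility analysis) multiplies knowledge turns. All numbers are assumed.

BUDGET_MONTHS = 120   # a decade of research time
FULL_TRIAL = 36       # months to reach a survival-style end point
EARLY_CHECK = 6       # months to a surrogate read-out
# Fixed outcome stream: True = hypothesis doomed to fail (7 in 10 here).
OUTCOMES = [True, True, False, True, True, True, False, True, True, True] * 5

def knowledge_turns(with_early_check):
    """Count completed hypothesis-to-result cycles within the budget."""
    months = completed = 0
    for doomed in OUTCOMES:
        # With a surrogate end point, a doomed experiment stops cheaply.
        cost = EARLY_CHECK if (with_early_check and doomed) else FULL_TRIAL
        if months + cost > BUDGET_MONTHS:
            break
        months += cost
        completed += 1
    return completed

print(knowledge_turns(False), knowledge_turns(True))  # 3 10
```

With these assumed figures, the same decade yields three turns when every experiment must run to the definitive end point, but ten when failures are caught at the surrogate check, which is the multiplier reliable biomarkers would offer.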
Reaching this end point usually takes a long time; thus, knowledge turns are slow. In many instances, a scientist's career can continue only through 2 or 3 such turns. The result is wide-scale experimentation with animal models of dubious relevance, whose merit principally lies in their short lifespan. If reliable biomarkers existed that track the progression of disease, their impact on knowledge turns and consequently on the speed of development of treatments and drugs could be dramatic. Even though such biomarkers could have a profound effect on medical research efficiency, biomarker development efforts seem far too low. Although precise numbers are difficult to come by, in my estimation, in the microchip industry, research into development, test, and evaluation methods represents about 10% of total research and development budgets. This 10% is taken off the top, resulting in less actual product development than the engineers, marketers, or business managers would like. But an understanding that this approach will lead to more rapid knowledge turns protects this allocation from the insatiable appetite of the business. The National Institutes of Health (NIH) budget is about $28 billion a year.4 It seems unlikely that anywhere near 10%--$2.8 billion--is spent on biomarker development. A second difference between the microchip and health science industries is the rate at which hard-fought scientific results are "brought to market"--produced in volume in the case of microchips or translated into clinical use in the case of medicine. A key factor in accelerating the movement of discoveries from the research laboratory to marketplace (or from bench to bedside) is the nature of the facilities in which translational work is performed. The world of business has many stories of failures of organizational designs that impede technology transfer. 
The classical research laboratory, isolated and protected from the chaos and business-driven urgencies of production units, often led to disappointing results. For example, when Intel started, the leadership resolved to operate without the traditional separation of development from production, which worked remarkably well for quite some time. Developers had to compete for resources with the business-driven needs of production, but their efforts were more than compensated by the ease with which new technology, developed on the production line, could be made production worthy. Today, an evolution of this resource-sharing principle continues in the microchip industry. Dedicated developmental factory units are designed from the ground up with the aim of eventually turning them into production units. They are overbuilt for the needs of development, but once development is completed, the facility is filled with equipment and people and transformed into a production unit in a matter of months. Although overbuilding for the development phase costs more initially, the savings in efficiency of moving products to production more than make up for this initial outlay. Medical facilities are designed for a variety of purposes, ranging from outpatient clinics to surgical centers, from general hospitals to tertiary hospitals. There is room for a translational hospital designed from the ground up with the mission of speeding new developments toward usage in general hospitals. These hospitals would be flexible, equipped with extra monitoring capability, ready to deal with emergencies--all extra costs but likely to be made up by the resulting increase in translational efficiency. Some examples exist, such as the NIH Clinical Center. Some cancer centers have adopted changes in hospital design that are steps in this direction. However, much more needs to be done before these designs are evaluated and an optimal approach is adopted and proliferated throughout the health care industry. 
When it comes to operational efficiency, nothing illustrates the chasm between the 2 industries better than a comparison of the rate of implementation of electronic medical records with the rate of growth of electronic commerce (e-commerce). Common estimates suggest that no more than 15% to 20% of US medical institutions use any form of electronic records systems.5 By contrast, during the last 10 years, more than $20 trillion worth of goods and services have been bought and sold over the Internet (A. Bartels, written communication, June 2005). e-Commerce started in the era of mainframe computers. It required specialized software, created and owned by the participants (so-called proprietary software). To link buyer with seller, each had to have the same software. The software was expensive and difficult to modify and maintain. Consequently, the use of e-commerce was limited to a few large companies. The Internet changed all that. Computing became standardized, driven by the volumes of substantially identical personal computers; interconnection standards were defined and implemented everywhere. A virtuous cycle evolved: standards begot large numbers of users, and the increasing numbers of users reinforced the standards. It was easy to become part of an electronic marketplace because it no longer required the installation of proprietary software and equipment. The early applications were pedestrian, merely reducing orders taken by telephone, manual data entry and reentry, and the use of faxes. But the benefits were spectacular. Costs and error rates plunged. Small- and medium-sized companies rushed to join the electronic marketplace, necessitating the development of a standardized software code that would translate information from one company's system to that of another, the computing version of the Rosetta stone. Although the computer industry is fairly fragmented,6 the health care industry is even more so. 
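The "Rosetta stone" idea here is, at bottom, a translation layer between incompatible record formats: each participant maps its proprietary fields to one shared vocabulary instead of writing custom software for every trading partner. A minimal sketch of the idea in Python follows; the field names and mappings are entirely invented for illustration and do not represent any real medical-records schema:

```python
# Toy "Rosetta code": each organization publishes ONE mapping from its
# proprietary field names to a shared standard vocabulary. Any two
# participants can then exchange records without point-to-point software.
# All names below are hypothetical.

CLINIC_A_TO_STANDARD = {
    "pt_name": "patient_name",
    "dob": "birth_date",
    "rx": "medication",
}
PAYER_B_TO_STANDARD = {
    "member": "patient_name",
    "birthdate": "birth_date",
    "drug": "medication",
}

def to_standard(record, mapping):
    """Rename a record's fields into the shared vocabulary."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def translate(record, source_mapping, target_mapping):
    """Source format -> standard -> target format, with neither side
    knowing anything about the other's internal field names."""
    standard = to_standard(record, source_mapping)
    reverse_target = {v: k for k, v in target_mapping.items()}
    return {reverse_target[k]: v for k, v in standard.items()}

clinic_record = {"pt_name": "Doe, J", "dob": "1950-01-01", "rx": "aspirin"}
payer_record = translate(clinic_record, CLINIC_A_TO_STANDARD, PAYER_B_TO_STANDARD)
```

The economics match the article's argument: with n participants, each maintains one mapping to the standard rather than n-1 bilateral translators, which is what lets small and medium-sized organizations join cheaply.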
Like the computer industry, health care is a largely horizontally organized industry, with the horizontal layers representing patients, payers, physicians, and hospitals, as well as pharmaceutical and medical device companies. Standard ways of interconnecting all these constituencies are crucial. The good news is that the desire to increase internal productivity has led to at least partial deployment of information technology within the companies of many of the participants. Further good news is that the physical means of interconnecting the many participants already exists in the form of the Internet. The bad news is that with the exception of a few, large, vertically integrated health care organizations, in which participants from several layers are contained in 1 organization (as is the case with the Veterans Affairs Administration and Kaiser Permanente), the benefits of electronic information exchange are not necessarily realized by the participants in proportion to their own investment.7 The industry faces what is called in game theory the "prisoners' dilemma": all members have to act for any one member to enjoy the benefit of action. Such collective action often requires external stimulus. The year 2000 problem (ie, "Y2K") was an example of such a stimulus, causing the near-simultaneous upgrade of the worldwide computing and communications infrastructure. Although its ostensible benefit was the avoidance of a digital calamity at the turn of the century, its greatest benefit was in readying thousands of commercial organizations for the age of the Internet and e-commerce. Even though the task facing the health care industry in developing and deploying the crucial "Rosetta code" is much smaller than the task of getting ready for 2000 was, external impetus is still needed to catalyze serious action. The National Health Information Infrastructure Initiative8 demonstrates some desire to encourage progress along these lines. 
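The game-theory point can be made concrete with a toy payoff table. In a prisoners'-dilemma structure, each participant does individually better by holding back regardless of what others do, even though universal action beats universal inaction; the payoff numbers below are invented, and only their ordering matters:

```python
# Illustrative 2-player game: each player chooses whether to "invest" in
# interoperable records or "hold" back. Numbers are hypothetical; what
# makes this a prisoners' dilemma is only their ordering.
# payoff[(my_choice, other_choice)] = my payoff
payoff = {
    ("invest", "invest"): 3,  # both benefit from data exchange
    ("invest", "hold"):   0,  # my investment is wasted if I act alone
    ("hold",   "invest"): 4,  # I avoid the cost while the other bears it
    ("hold",   "hold"):   1,  # status quo: no exchange, no cost
}

def best_response(other_choice):
    """The choice that maximizes my payoff given the other's choice."""
    return max(("invest", "hold"), key=lambda me: payoff[(me, other_choice)])

# Whatever the other player does, "hold" is the individually best reply,
# yet mutual investment would leave both better off than mutual holding.
# This is why the article argues an external stimulus (Y2K then, perhaps
# a Medicare mandate now) is needed to move everyone at once.
```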
However, what is needed to cause the industry to act is customer demand. The largest customer--approaching half of total health care spending9--is the Medicare system. It seems that the entire health care industry would benefit if Medicare mandated the adoption of a Rosetta code for the health care industry before institutions were granted permission to participate in Medicare business. There are signs that individual consumers may be taking matters into their own hands. The proliferation of companies providing personal health record services10 is an indication of such a movement. This phenomenon has all the makings of becoming a disruptive technology.11 Disruptive technologies, usually initiated by small businesses that are new to the industry in question, can force widespread defensive actions by the much larger industry incumbents. In this case, inadequate response by the incumbents could lead to some of the emerging providers of personal health record services becoming the owners of the customer relationship--a development of considerable strategic significance to all such businesses. The health care industry in the United States represents 15% of the gross domestic product,12 and bearing its cost is a heavy burden on corporations and individuals alike. The mandate for increasing its efficiency--in research, translation, and operations--is clear. History shows that whatever technology can do, it will do. If not here, where? If not now, when? AUTHOR INFORMATION Corresponding Author: Andrew S. Grove, PhD, Intel Corporation, 2200 Mission College Blvd, Santa Clara, CA 95054 (Andy.Grove at intel.com). Financial Disclosures: Intel Corporation manufactures microprocessors and other types of microchips that can be used in health care information technology. Author Affiliation: Dr Grove is former chairman of the board of Intel Corporation, Santa Clara, Calif. REFERENCES 1. Moore G. Cramming more components onto integrated circuits. Electronics Magazine. 
April 19, 1965:114-117.
2. Leaf C. The war on cancer. Fortune. March 2004:76-96.
3. Elm JJ, Goetz CG, Ravina B, et al. A responsive outcome for Parkinson's disease neuroprotection futility studies. Ann Neurol. 2005;57:197-203.
4. American Association for the Advancement of Science. NIH "soft landing" turns hard in 2005: R&D funding update on R&D in the FY 2005 NIH budget. Available at: http://www.aaas.org/spp/rd/nih05p.htm. February 20, 2004. Accessed June 20, 2005.
5. Manhattan Research. Taking the Pulse v 5.0: Physician and Emerging Information Technologies. New York, NY: Manhattan Research; April 12, 2005. Available at: http://www.manhattanresearch.com/thepulse2005.htm. Accessed June 20, 2005.
6. Grove AS. Only the Paranoid Survive. New York, NY: Doubleday; 1996:42.
7. Pearl R, Meza P, Burgelman RA. Better Medicine Through Information Technology. Stanford, Calif: Stanford Graduate School of Business; 2004. Case study SM136.
8. Stead WW, Kelly BJ, Kolodner RM. Achievable steps toward building a national health information infrastructure in the United States. J Am Med Inform Assoc. 2005;12:113-120.
9. Cowan C, Catlin A, Smith C, Sensenig A. National health expenditures, 2002. Health Care Financ Rev. 2004;25:143-166.
10. Markle Foundation. The Personal Health Working Group final report. Connecting for health: a public-private collaboration [appendix 2]. New York, NY: Markle Foundation; July 2003. Available at: http://www.connectingforhealth.org/resources/final_phwg_report1.pdf. Accessed June 20, 2005.
11. Christensen CM. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, Mass: Harvard Business School Press; 1997.
12. 2004 CMS Statistics. Baltimore, Md: US Dept of Health and Human Services, Centers for Medicare & Medicaid Services, Office of Research, Development and Information; 2004. CMS Pub No. 03455. Available at: http://www.cms.hhs.gov/researchers/pubs/CMSstatistics/2004CMSstat.pdf. Accessed June 20, 2005. 
From checker at panix.com Sat Jul 30 15:26:08 2005 From: checker at panix.com (Premise Checker) Date: Sat, 30 Jul 2005 11:26:08 -0400 (EDT) Subject: [Paleopsych] Bob Whitaker: Benjamin Franklin c/o Time Warp Mail Service Message-ID: Benjamin Franklin c/o Time Warp Mail Service Bob's Blog - WhitakerOnline.org ? 7/30/05 Insider Letter http://whyjohnny.com/blog/?p=639 Dear Mr. Franklin, You are facing extremely serious legal problems. 1) Your invention of bifocal lenses. You have no qualifications whatsoever in the fields of optometry or ophthalmology. You are ordered to cease and desist from the use or discussion of this product. Lawsuits have been lodged against you by people whose bifocals have broken and gashed their skin. Others say that they confuse the eyes and cause double vision. 2) Your invention of the stove. Your Franklin Stove has caused serious injury to a very large number of people. Children playing have bumped into it and been burned by it. You have no Federally-approved set of directions for its use, so you are personally responsible for every accident that occurs in using your product. 3) Your discovery of the Gulf Stream. As with optometry in the case of your invention of bifocals, you are practicing meteorology with no degree or other qualifications in the subject. While no one has yet been able to formulate an actual lawsuit against you on this subject, you have made a laughing-stock of yourself by going outside the field of printing, where you do have some actual credentials. You are in deep trouble in other areas. Your comments about Quakers, Indians and other minority groups were definitely Hate Speech. You are charged with manslaughter and armed robbery in aiding and abetting in the robbery of America from the Native Americans. Other charges are pending. 
Yours Indignantly, The Association of Experts, Lawyers, Professors and Other Authorities in the Year 2005 From checker at panix.com Sat Jul 30 15:28:16 2005 From: checker at panix.com (Premise Checker) Date: Sat, 30 Jul 2005 11:28:16 -0400 (EDT) Subject: [Paleopsych] NYT: Astronomers Find Another Planet in Solar System Message-ID: Astronomers Find Another Planet in Solar System http://www.nytimes.com/2005/07/29/science/29cnd-planet.html By KENNETH CHANG and DENNIS OVERBYE Add a 10th planet to the solar system - or possibly subtract one. Astronomers announced today that they had found a lump of rock and ice that is larger than Pluto and the farthest known object in the solar system. The discovery will likely rekindle debate over the definition of "planet" and whether Pluto should still be regarded as one. The new object - as yet unnamed, but temporarily designated as 2003 UB313 - is currently 9 billion miles away from the Sun, or 97 times as far away as the Earth and about three times Pluto's current distance from the Sun. But its 560-year elliptical orbit also brings it as close as 3.3 billion miles. Pluto's orbit ranges between 2.7 billion and 4.6 billion miles. The astronomers do not have an exact size for the new planet, but its brightness and distance tell them that it is at least as large as Pluto. "It is guaranteed bigger than Pluto," said Michael E. Brown, a professor of planetary astronomy at Caltech and a member of the team that made the discovery. "Even if it were 100 percent reflective, it would be larger than Pluto. It can't be more than 100 percent reflective." The discovery was made Jan. 8 using a 48-inch telescope at Palomar Observatory in California. Dr. Brown and the other members of the team - Chadwick A. Trujillo of the Gemini Observatory in Hawaii and David L. Rabinowitz of Yale University - then found that they had, unknowingly, taken images of the planet as far back as 2003. 
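[Editor's note: Dr. Brown's "guaranteed bigger than Pluto" argument works because, for a fixed observed brightness at a known distance, a more reflective surface implies a smaller body, so perfect reflectivity (albedo = 1) sets the minimum possible size. A sketch using the standard absolute-magnitude relation for minor planets; the H value used here is purely illustrative, not the measured figure for 2003 UB313:]

```python
import math

def diameter_km(h_mag, albedo):
    """Size estimate for a minor planet from its absolute magnitude H
    and an assumed geometric albedo, using the standard relation
    D = 1329 km / sqrt(albedo) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

H = -1.0  # illustrative absolute magnitude only, not a measured value

# Albedo = 1 (perfectly reflective) yields the smallest diameter
# consistent with the observed brightness; any realistic, darker
# surface implies a larger body -- the logic behind the lower bound.
minimum_size = diameter_km(H, 1.0)
darker_surface_size = diameter_km(H, 0.6)
```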
Last year, the same team announced the discovery of a distant body they named Sedna, which, until the latest discovery, had held the title of farthest known object in the solar system. Dr. Brown said they had a name they had in mind for the planet, but did not want to disclose it publicly until it had been formally proposed to the International Astronomical Union. "We have a name we really like, and we want it to stick," he said. Informally, the astronomers have been calling it "Xena" after the television series about a Greek warrior princess, which was popular when the astronomers began their systematic sweep of the sky in 2000. "Because we always wanted to name something Xena," Dr. Brown said. The astronomers were not able to see 2003 UB313 using NASA's Spitzer Space Telescope, which looks at longer-wavelength infrared light. That means the planet is less than 1,800 miles in diameter. What is most surprising is that the orbit of the planet is sharply skewed relative to most of the rest of the solar system. The orbits of most of the planets lie close to the same plane as the Earth's, known as the ecliptic plane. The orbit of 2003 UB313 is tilted by 44 degrees. "That blows my mind," said Harold Levison of the Southwest Research Institute in Boulder, Colo., who was not involved in the discovery. "Getting something up that high is very hard." The object is also the third brightest in the Kuiper Belt, a ring of icy bodies that circles beyond the orbit of Neptune. The new planet could have been easily discovered much sooner if anyone had looked at that part of the sky. "It's because no one looks that far off the ecliptic," Dr. Brown said. "No one expects to have an inclination that high." Another group of astronomers led by Jose-Luis Ortiz at the Sierra Nevada Observatory in Spain announced that they had found a large Kuiper Belt object, designated 2003 EL61, that they thought could be Pluto-size or larger. Dr. 
Brown's group had been observing the same body but had not announced it, and their observations had already pinpointed a moon circling 2003 EL61, which constrained the size of the body to about 70 percent the diameter of Pluto. On his Web site, Dr. Brown wrote that the Spanish group deserved credit, saying they had gambled that no one else would find the planet. "We were wrong!" he said. Dr. Brown had still hoped to hold back announcements of 2003 UB313 and another large Kuiper Belt object, 2005 FY9, until October, but his hand was tipped by Brian Marsden, director of the Minor Planet Center, who said that he was worried about hanky-panky. Dr. Marsden said that it was possible, by looking on the Internet at the logs of one of the telescopes Dr. Brown's team had been using, to find out where they had been pointed. He had evidence, he said, that someone had done that and computed crude orbits of the two unannounced planetoids, "presumably" in preparation for their own observations. "I was shocked to find this kind of information was available on the Web," Dr. Marsden said. He urged Dr. Brown to announce his findings. "I was suspicious and I warned him," he said. "We try to give credit where credit is due. Brown's team deserves a lot of credit for carrying out this program." The solar system suddenly has three large bodies at its outskirts, and with one of them larger than Pluto, the debate over what is a planet will likely revive. Astronomers do not have a formal definition for planet, and many have said that if Pluto had been discovered today, it would not have been called a planet. The first of the smaller Kuiper Belt objects were discovered in 1992, more than half a century after Clyde Tombaugh found Pluto. The Minor Planet Center proposed in 1999 that Pluto, while maintaining its position among the major planets, also be given an official designation among the Kuiper Belt objects. 
The center dropped the proposal after outcry from those who saw "dual status" as a demotion. Mr. Williams said he still supported bestowing dual status on Pluto. But he thought that 2003 UB313 should not be added to the registry of major planets. "Leave it as a minor planet permanently," he said. Mark V. Sykes, one of the defenders of Pluto's planetary status, said his initial inclination would be to name 2003 UB313 a planet, too. He wondered whether it had an atmosphere and what sort of geological processes generated its apparently bright surface. "The kinds of questions we would ask about this object would be planet-like questions," said Dr. Sykes, director of the Planetary Sciences Institute, a private research institution based in Tucson, Ariz. Astronomers will also have to figure out how the body made it to its current skewed orbit. "It makes Pluto seem less weird," Dr. Sykes said. Dr. Brown said he used to support Mr. Williams's view of Pluto as a minor planet, but "I've given up on that." But if Pluto, for historical reasons, is grandfathered in as a planet, then "This one, I would say, counts as the 10th planet," Dr. Brown said. Five and a half years ago, Dr. Brown bet an astronomer friend, Sabine Aireau, five bottles of good Champagne that he would find a Kuiper Belt object larger than Pluto by the end of last year. In December, having failed, he bought the five bottles of Champagne to send her. Then 2003 UB313 turned up on Jan. 8. "I lost the bet by eight days," Dr. Brown said, but "she graciously decided she would let that window slide and I would win the bet." He added, "That means I get to drink 10 bottles of good Champagne, and I think I will." 
From christian.rauh at uconn.edu Sat Jul 30 18:19:19 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Sat, 30 Jul 2005 14:19:19 -0400 Subject: [Paleopsych] Re: on islam In-Reply-To: <128.61c67403.301c74d9@aol.com> References: <128.61c67403.301c74d9@aol.com> Message-ID: <42EBC4A7.20704@uconn.edu> Some argue that the great achievements of the Arab world were really pre-islamic and that islam put a brake on innovation, which was central to Middle Eastern culture before it. Things still were good for a while, but that was only some inertia from the old times, as it took centuries to change things in those days. Islamic religion changed the characteristics of that culture (or culture groups) from decentralized and autonomous to centralized and controlled. The first seemed to be a better option for progress. What is striking to me is that the US seems to be going the same way - in countering a "terrorist" movement with islamic roots, this country is absorbing the worst features of its "enemy". As a disclaimer, I know little about Islam; the above is from what I've heard. Anyone with better knowledge should point out errors and elucidate. Christian HowlBloom at aol.com wrote: > In a message dated 7/29/2005 11:40:19 AM Eastern Standard Time, > psaartist at hotmail.com writes: > > Sorry about that. I just put it into the text of this email- > > > Hello Howard, > I'm afraid this week I have failed outright as a research assistant, but I > have been ruminating and writing down a few impressions about Islam with > your project in mind. The following ideas were written down over the course > of several days and are in varying degrees of wholeness. I hope some of > them might help you to see the shapes of your own ideas, either in the light > of my good ones or against the shadows of my bad ones. 
> I did find some dissidents in Islam through my brief search on Google: > http://www.islamreview.com/articles/islamapostasy.shtml > http://www.bharat-rakshak.com/SRR/Volume11/gayatri.html > I think someone in the online conference asked for a Muslim Gandhi. The > second of these links answers to that, the Muslim "Frontier Gandhi". > Reading their stories tends to make Islam look worse rather than better, > though. Basically because of the lies, threats, violence and hypocrisy they > had to try to endure. In some cases with Western collaboration. > Fundamentally, every power system instinctively hates a dissident. > I think Islam has a different "flavor", a different character when it > comes to ideas of authority, law, censure, belonging and transgression. Its > basic spirit, to which it so often returns, seems to be legalistic (subject > to interpretation and manipulation) and authoritarian rather than humane. It > is closed and fearful of challenge rather than open. It is not merciless, > but even the power of mercy is used as a demonstration of authority. > Part of this comes from the tribal and ethnic environments where Islam > took hold. > hb: these are extremely good observations. Punishment and robotic devotion > to the duties of the religion are prized. Creativity and individuality are > minimized. In Bloomian terms, it's a religion whose conformity enforcers > outweigh its diversity generators. That may well be why it has fared so poorly > against the West ever since the Industrial Revolution and the Scientific > Spirit blossomed in Europe around 1750. > > > > The West has trouble dealing with Islam because the West has trouble > dealing with religion. Philosophically or ethically, Christian or Jewish > fundamentalists can face fundamentalist Islam only on the most impoverished > terms- "You are devils, we are angels". The contest is therefore left to be 
decided in terms of pure violence. Liberal Westerners, secularists, avoid > making any definitive criticism of a religion or a religious person because > that would violate ideas of understanding, tolerance, inclusiveness, and > humility. In a sense they are determined not to seek a decision to the > contest. Each half of the West, the religious and the secular, is hamstrung > because it knows it can not count on the support of the other. I'm sure the > Jihadists are delighted to observe this. > hb: extremely good observations, Peter. > > > The attractions of the Enlightenment, the positive example, are being > dimmed and perverted by the excesses (or the essence) of the war on Terror, > which is a radical, lawless, barbarizing Western Jihad created by > fundamentalists. > One might wish for a tough, clear-minded secularist player on the scene. > hb: I'm trying to be one voice for this approach. But my voice is a mere > squeak at this point, limited primarily to books and to a fairly steady run of > national talk radio appearances. > > > The EU? The French, who ban all forms of religious expression from their > schools? I don't know how successful any of their efforts are. > A side question- given economic prosperity, education, and a fairly > stable long term social situation, will people from Muslim cultures behave > like a mirror image of secularist westerners? > hb: which Western model do you mean-- > > 1) the revolutionary Marxists who clubbed the brains out of the Russians and > the Chinese until 1989 and killed 80 million people in the process, but who > fed the need of humans in their teens and twenties to rebel against their > parents? > > 2) or the folks who have gone to their jobs and been part of "the system", a > system whose rich, rich rewards they fail to see? Opening the eyes of > Westerners to the benefits and future uplift of "the system" is what Reinventing > Capitalism: Putting Soul In the Machine--A Radical Reperception of Western > Civilization is all about. 
That's my half-completed next book. > > One might be considered a > racist for asking the question, or for necessarily expecting either a yes or > a no answer to it. > hb: but this social ritual we call "condemning racism" is another of the > perceptual throttles you refer to above. To paraphrase you, we forbid certain > forms of thought in order to maintain what we cherish, pluralism, tolerance, > and free speech. And we are easily led to think that thoughts vital to our > survival fall into the forbidden zones...even when they don't. Islam is not a > race! > > To put it another way, is the Western secularist "flavor" > a sort of inevitable world-historical ideal? > hb: nothing is inevitable. Winston Churchill proved that with a sufficient > exertion of will and perseverance, we humans can change history. Lenin > before him proved the same thing. And the rise of the ideas of a relatively > anonymous thinker, Karl Marx, proves that multi-generational projects kept aloft > by generations of stubborn persistence can make massive differences in the > path that social evolution takes. Even if those paths lead to dead ends 150 > years down the line (from 1848, when Marx and Engels issued the Communist > Manifesto to 1989, when the Berlin Wall fell). > > Is it an idea of the same > species as the inevitable Marxist worldwide revolution? That is, an > assumption taken for granted by its adherents (the way fish don't notice the > water they swim in), but about to encounter a big world that has other > plans? Me, I don't have an answer to these questions. > hb: in the terms of my second book, Global Brain, the battle between groups, > the battle between civilizations, represents a test of alternative > hypotheses in the group mind. Unfortunately, the groups that fare best in battle are > sometimes the worst at running a society at peace. But that easy > generalization may not be true. 
The society that wins is the one with the greatest > will, the highest morale, and the most unending supply of resources. Germany and > Japan ran out of resources. The Allies had America, a resource bonanza way > back then. So the Allies won WWII. > > Same thing happens in contests between lizards, lobsters, or stags for > dominance. The animal with the highest degree of confidence and the largest > reserve of resources wins. Which leads to a question. What is confidence? > > Based on the work of Neil Greenberg with anolis lizards, I'd say confidence > is an emotional and perceptual setting that allows a creature: > > 1) to see opportunities in slices of reality others would regard as > threatening > 2) to maintain a sense of perceived control > 3) to hang on to the serotonin and dopamine settings these perceptions > produce > 4) to avoid the non-stop flood of stress hormones that poison an animal's > ability to outdo others at shows of majesty, decisiveness, calm under pressure, > and implied menace > 5) to use the stress hormones only in actual combat, when those hormones are > arousers, not poisons > > In other words, perception, physiology, group organization, and resources > work hand in hand to produce winners and losers, winners of intergroup > tournaments or winners of personal struggles. > > In humans, it helps to have a worldview that allows you to perceive the riches > in what you've got and that helps you see how what you do contributes to > something larger than yourself, something that uplifts your people as a group and > that uplifts the individual people around you. If you've got that, you can > tap the hormonal cocktail of idealism. > > What's more, your perceptions influence your resources. The Tasmanians died > when they'd hunted down all the land animals on their island. They died of > starvation. Why? Their worldview, their collective perceptual system, told > them that the animals of the sea, fish and other seafood, were inedible trash. 
> > The Japanese lived on islands more impoverished than those of the Tasmanians. > But they saw everything around them as a usable resource. So the > Tasmanian perceptual system killed its people off. The Japanese perceptual system > has been a winner for roughly 1,300 years (roughly the same amount of time > that Islam has been a worldview using the people it manipulates, empowers, and > motivates as a test vehicle). > > Islam is poor at seeing resources in rubble. But it's very good at organized > violence and unconventional warfare. Can a worldview that impoverishes its > people to stoke their sense of victimization and their need for revenge, for > justice, and for the purity of god's own laws beat a worldview that has > created relative wealth even for its poorest citizens? (One of our local homeless > men gets his food and coffee at Starbucks and gourmet delis, owns a bicycle, > supports the luxury of god-knows-what-self-destructive-habit, and has access > to trash that's the equivalent of treasure even to me.) > > Only if this civilization fails to perceive the riches it creates--the > spiritual riches that come from "consumerism" and "materialism". And only if this > civilization fails to perceive the riches in what it now discards as > trash--passion, emotion, and empathy--the things that we need most to upgrade the > jobs we go to every day and to upgrade the companies that give us those jobs. > > That, too, is the goal of Reinventing Capitalism. To get us Tasmanians to > see the riches in the seas around us. > > > > Is Islam at all compatible with democracy? > hb: yes. The Iranians are proving that, despite the headlock the > conservative mullahs have on the current government...and may have for another ten > years or so, but may eventually lose as the old guard of the 1979 Revolution dies > out. The Lebanese are now trying to prove that Islam and democracy can work > together, too. > > That remains to be seen. 
Where > democracy and Islam mix, will one of them have to become essentially > denatured? > hb: bureaucracy, whether it's at the DMV here in Brooklyn or in the > government of Hosni Mubarak in Cairo, needs the perceptual upgrade of reinventing > capitalism. Bureaucracies have to be restructured so that bureaucrats know that > their task is to use their hearts--their empathic passions--to defend and > advance other humans. > > > > What are today the various layers in Islam of what is, what is desired, > and what cannot be expressed? > > hb: Islam is out to achieve its "just place" in the hierarchy of > cultures--the number one slot. That's what God has promised. That's what God says > Islam must be--number one, top dog. And God has said that if that requires > "making wide slaughter in the land" (a key phrase and a key message in the Koran > and the Hadith, the additional Islamic holy books), then so be it. Social > standing often means more than food and water to individual humans and to human > groups. > > > Howard, in one of the Sunday night online conferences you described Islam > as having an anti-art stance, and you seemed to see Persian representational > (that is, pictorial) art as a minor exception that only serves to prove the > rule. It is indeed true that many Islamic cultures prohibit any earthly > thing from being depicted, and some prohibit music to varying degrees. > However, a moment's reflection should bring to mind the many centuries of > Islamic development in many arts, in some areas to the highest point. For > example, weaving, architecture, design, ceramics, poetry, music, > calligraphy. The Quran itself in its recital is a consummate work of > literary, poetic and performative art. > hb: I've been counting Islam's contributions today, and compared to those of > the West they are scant. 
Islam has given us fabulous architecture, > architecture based on Western models, fabulous calligraphy, and one fabulous > book--The Thousand and One Arabian Nights. The Koran is considered the epitome of > literature, God's own verses, in the world of Islam. But read it in English > and it comes across as a primitive hash. > > Many of the things attributed to Arabs and Islam are borrowings--Arab > numerals, for instance, which are Indian, not Arabic. However Islamic culture > provided a vital transit point that quickened the commerce, the interchange, of > styles and ideas, giving westerners the silks of China, the ceramics of China, > the mathematics of India, and the literature and philosophy of classical > Greece (a literature the West lost track of until the tenth century, when it > trickled from Moslem Spain into Christian Europe). Arabs invented a new form of > sea-faring, using the triangular sail (the lateen sail) to tack into the > wind and inventing a way to harvest the catastrophe of the Monsoon winds to make > annual trips by sea from Oman and Yemen to India and to the Spice Islands, > the islands of Java, Malacca, the Maldives, Sumatra, Aceh, the Philippines, > and Zanzibar, not to mention the ports of Mombassa. > > They invented a commerce in black African slaves that defies belief. We > Westerners uprooted ten million black Africans and used them in our slave > business. That is appalling and is justly labeled a "Black Holocaust". Moslem > traders from Arabia and India uprooted 140 million black Africans. That's > fourteen African Holocausts! > > The Western slave trade imposed such monstrous conditions on its captives > that one out of every ten seized from their homes died somewhere in transit. > The Moslem slave trade imposed such monstrous conditions that only nine out of > ten Africans attacked and/or captured DIED. That brings the death toll > of the Black Islamic Holocaust up to the level of 120 Black Western > Holocausts. 
126 million deaths--the number inflicted by Moslem slave traders and slave > raiders--is the equivalent of 21 of the Holocausts inflicted on the Jews by > the Nazis. It's twice the combined death tolls of World War I and of World > War II--the two most industrialized uses of killing machines known to man, > wars in which two atomic bombs were loosed on civilian populations. > > And we are supposed to believe that decrying this turning of more than half > a continent into a killing field, this mass merchandising of black humans in > which all males were killed or castrated, this mass deportation of a race in > dhows packed so solidly with human cargo that many of those crammed into the > seafaring vessels of Arab merchants died of suffocation, this trade whose ship > captains combed their cargo before entering a port to search out the weak > and the ill, then to throw this faulty merchandise overboard to avoid paying > import taxes on humans too feeble to sell, this trade in females and young > boys as sex slaves, we are supposed to believe that denouncing this or even > researching its details is a racist crime? > > And we are told this by apartheid states like Saudi Arabia which I, as a > Jew, cannot enter? Why? Because of my tribal identity, my race, my Jewish > genes, my Jewish blood, and my Jewish genealogy. And we Jews are supposed to > believe that we, who often live peacefully among Arabs and Moslems as I did > when living outside the Arab town of Afullah and as I do in a Brooklyn > neighborhood riddled with mosques, mosques in which bomb plots to destroy the World > Trade Center and to destroy the New York Subway and rail system have been > hatched, we are supposed to believe that I am some sort of Nazi who lobbies for > apartheid? > > This is a violently perverse perception, one which we voluntarily enforce.
> It is a system of censorship which we gladly encourage, often for the reasons > you have pinpointed, because it fits our sense of fairness and tolerance. > But is it really fair to decry our murders and to close our eyes to piles of > bodies far higher than any we have ever erected? If we are ethical and prize > human life, isn't it incumbent on us to open our eyes and to decry both > Islam's crimes and ours? > > Or are we here to inflict so much guilt on ourselves that we kill the > civilization that has given even Moslems in the slums of Cairo TVs and radios? > Should we really condemn the mix of capitalism and open criticism that has given > spoiled Moslem middle-class and rich kids like Osama bin Laden and his foot > soldiers computers and cell phones? Should we despise the civilization that > has brought ordinary Japanese, Koreans, Taiwanese, Thais, Filipinos, > Indians, and, now, Chinese from starvation to wealth beyond the power of 19th > Century kings? Should we overthrow a Western system that has produced the > anti-slavery movement, the anti-imperialism movement, the human rights movement, the > environmental movement, Greenpeace, Amnesty International, the ACLU, NASA, > solar energy, hybrid vehicles, and the first steps toward a possible hydrogen > economy? > > This Islamic material is what I'm working on for the Tenth Anniversary > Edition of The Lucifer Principle: A Scientific Expedition Into the Forces of > History. And the reperception of the Western System is the raison d'être of > Reinventing Capitalism: Putting Soul In the Machine--A Radical Reperception of > Western Civilization. > > All thanks for some stunningly good insights, Peter. > > I've read the rest and have made one more comment. You are a good, good > thinker. But this is where my energy ebbed. Onward. > > > > The reason I raise this issue is that I see a danger of a sort of > 'momentum' of negative criticism when one looks at Islam and its many > problems.
To find oneself convinced that Islamic expression is against art > is to have lost one's bearings in the argument. That would be equivalent to > thinking that Chartres cathedral is against art. Perhaps the issue in > Islamic art that offends a child of the Enlightenment (I will presume to > put us in that category) is that Islamic expression seems consistently to be > against independence. This issue, so important to us, may make us look at > the work as deficient and backward. I think what we are really seeing is > that the work totally refuses to participate in the Western modern project. > I'm thinking of that project as secular, humanist, trying to explore without > a predetermined destination. As much of the world has taken on this > modernist (and post-modern, etc.) quality, and as things in general are made > industrially rather than by hand, Islamic art has been uprooted and stifled > and as far as I know hasn't produced anything of fulfilling greatness in our > era. One exception could be Islamic (Sufi) music, which seems to be in quite > a healthy state, as the late Nusrat Fateh Ali Khan and others have recently > and abundantly demonstrated. > > On learning and writing about Islam: > I have gone through a couple of amateurish phases of trying to learn > about Islam, and have always ended up feeling that my conclusions have > missed the essential point of what I was trying to grasp. To say that Islam > is multi-dimensional is an understatement: I'll say that it is beyond > multidimensional. In a way it is even misleading to say that Islam is 'a > religion', much less to say that it is 'a religion politicised' or anything > along those lines. I think it is a phenomenon that transcends 'religion' at > all times and under all circumstances. This is because the essence of its > project is the necessary capture and integration of the social, ethical, > ritual, religious, spoken, gender, philosophical, legal, familial, and other > spheres, on and on.
(I am not here making any claims about the justness or > even the real feasibility of this project.) > hb: as Osama says, Islam is about unity, it is about one-ness. One God. > One code of laws. Incorporating all aspects of human life in that holy One --

_____________________________________________________________________
~ I G N O R A N C E ~
The trouble with ignorance is precisely that if a person lacks virtue and knowledge, he's perfectly satisfied with the way he is. If a person isn't aware of a lack, he cannot desire the thing which he isn't aware of lacking.
Symposium (204a), Plato
_____________________________________________________________________

From HowlBloom at aol.com Sun Jul 31 02:38:54 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sat, 30 Jul 2005 22:38:54 EDT Subject: [Paleopsych] Re: on islam Message-ID: <1e5.415d9b1d.301d93be@aol.com>

Christian--You have a point. Islam was conceived as a global, boundaryless religion. That conception was first born in 624 AD when Allah granted Mohammed the right to Jihad. It expanded in 629, when Mohammed sent letters to the six rulers of the empires of the world that he knew, inviting these emperors to Islam and implying that if they didn't accept the invitation, Allah and his forces on earth, the Moslems, would be forced to destroy them. Its message was emphasized in 632 AD when Mohammed, on his deathbed, ordered an attack on the Byzantine Empire. This attack, by the way, was just one of many. In ten years Mohammed commanded 65 military campaigns, campaigns of conquest that brought the entire Arabian Peninsula to Islam. Mohammed fought in 27 of those campaigns himself, slicing and killing other humans. Which is why he is called a prophet of the sword. And why it is said in the Koran that paradise can only be achieved "in the shadow of swords".
But the fact remains that this is a rapidly globalizing world, a world that cries out both for and against a central order. Mohammed invented the creed for such an order, and the world of Islam has refined it during 1,290 years of operation under a central caliphate. Now men like Osama want to revive that "new world order", that global caliphate. As many of Osama's supporters and predecessors have said, the time is ripe for such a thing. I hope that we Westerners offer a more appealing alternative. Howard

In a message dated 7/30/2005 2:20:49 PM Eastern Standard Time, christian.rauh at uconn.edu writes:

Some argue that the great achievements of the Arab world were really pre-Islamic and that Islam put a brake on innovation, which was central to Middle Eastern culture before it. Things still were good for a while, but that was only some inertia from the old times, as it took centuries to change things in those days. Islamic religion changed the characteristics of that culture (or culture groups) from decentralized and autonomous to centralized and controlled. The first seemed to be a better option for progress. What is striking to me is that the US seems to be going the same way - in a counter-"terrorist" movement with Islamic roots, this country is absorbing the worst features of its "enemy". As a disclaimer, I know little about Islam; the above is from what I've heard. Anyone with better knowledge should point out errors and elucidate.
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net

From shovland at mindspring.com Sun Jul 31 03:29:55 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 30 Jul 2005 20:29:55 -0700 Subject: [Paleopsych] Re: on islam Message-ID: <01C59545.745F1E80.shovland@mindspring.com>

First, we need to kill Osama.

Steve Hovland www.stevehovland.net

-----Original Message----- From: HowlBloom at aol.com [SMTP:HowlBloom at aol.com] Sent: Saturday, July 30, 2005 7:39 PM To: paleopsych at paleopsych.org Subject: Re: [Paleopsych] Re: on islam

[...]

From waluk at earthlink.net Sun Jul 31 03:47:39 2005 From: waluk at earthlink.net (Gerry) Date: Sat, 30 Jul 2005 20:47:39 -0700 Subject: [Paleopsych] Re: on islam In-Reply-To: <01C59545.745F1E80.shovland@mindspring.com> References: <01C59545.745F1E80.shovland@mindspring.com> Message-ID: <42EC49DB.6010709@earthlink.net>

A trial by jury of his peers likely will not work.
Gerry Reinhart-Waller

Steve Hovland wrote: >First, we need to kill Osama. > >Steve Hovland >www.stevehovland.net > >[...]

_______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych

From shovland at mindspring.com Sun Jul 31 04:02:23 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 30 Jul 2005 21:02:23 -0700 Subject: [Paleopsych] Re: on islam Message-ID: <01C59549.FBD44170.shovland@mindspring.com>

I think we need to do 3 things to reduce terrorism:
Decapitation: take out the leadership.
Uprooting: work on the real grievances of the people.
Noise reduction: shut down the religious schools, or at least change their curriculum.

Steve Hovland www.stevehovland.net

-----Original Message----- From: Gerry [SMTP:waluk at earthlink.net] Sent: Saturday, July 30, 2005 8:48 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] Re: on islam

A trial by jury of his peers likely will not work. Gerry Reinhart-Waller [...]

From kendulf at shaw.ca Sun Jul 31 04:42:58 2005 From: kendulf at shaw.ca (Val Geist) Date: Sat, 30 Jul 2005 21:42:58 -0700 Subject: [Paleopsych] Re: on islam References: <1e5.415d9b1d.301d93be@aol.com> Message-ID: <001c01c5958a$53694300$873e4346@yourjqn2mvdn7x>

Dear Howard, Thank you for bringing this to our attention! This and other matters pertaining to the history of Islam need to become a matter of public gossip! Sincerely, Val Geist

----- Original Message ----- From: HowlBloom at aol.com To: paleopsych at paleopsych.org Sent: Saturday, July 30, 2005 7:38 PM Subject: Re: [Paleopsych] Re: on islam

[...]
In ten years Mohammed commanded 65 military campaigns, campaigns of conquest that brought the entire Arab Peninsula to Islam. Mohammed fought in 27 of those campaigns himself, slicing and killing other humans. Which is why he is called a prophet of the sword. And why it is said in the Koran that paradise can only be achieved "in the shadow of swords". But the fact remains that this is a rapidly globalizing world, a world that cries out both for and against a central order. Mohammed invented the creed for such an order, and the world of Islam has refined it during 1,290 years of operation under a central caliphate. Now men like Osama want to revive that "new world order", that global caliphate. As many of Osama's supporters and predecessors have said, the time is ripe for such a thing. I hope that we Westerners offer a more appealing alternative. Howard In a message dated 7/30/2005 2:20:49 PM Eastern Standard Time, christian.rauh at uconn.edu writes: Some argue that the great achievements of the Arab world were really pre-islamic and that islam put a break on innovation which was center to middle eastern culture before it. Things still were good for a while but that was only some inertia from the old times as it took centuries to change things in those days. Islamic religion changed the characteristics of that culture (or culture groups) from decentralized and autonomous to centralized and controlled. The first seemed to be a better option for progress. What is striking to me is that the US seems to be going the same way - in a counter-"terrorist" movement with islamic roots, this country is absorbing the worst features of its "enemy". As a disclaimer, I know little about Islam, the above is from what I've heard. Anyone with better knowledge should point erros and elucidate. 
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change ; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net ------------------------------------------------------------------------------ _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych ------------------------------------------------------------------------------ No virus found in this incoming message. Checked by AVG Anti-Virus. Version: 7.0.338 / Virus Database: 267.9.7/60 - Release Date: 7/28/2005 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shovland at mindspring.com Sun Jul 31 04:56:54 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 30 Jul 2005 21:56:54 -0700 Subject: [Paleopsych] Re: on islam Message-ID: <01C59551.9A343710.shovland@mindspring.com> Don't forget that there were vast reserves of gold in Africa that fueled their growth for a long time. They traded a lot of it to India via Madagascar. Steve Hovland www.stevehovland.net -----Original Message----- From: Val Geist [SMTP:kendulf at shaw.ca] Sent: Saturday, July 30, 2005 9:43 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] Re: on islam Dear Howard, Thank you for bringing this to our attention! This and other matters pertaining to the history of Islam needs to become a matter of public gossip! Sincerely, Val Geist ----- Original Message ----- From: HowlBloom at aol.com To: paleopsych at paleopsych.org Sent: Saturday, July 30, 2005 7:38 PM Subject: Re: [Paleopsych] Re: on islam Christian--You have a point. Islam was conceived as a global, boundariless religion. That conception was first born in 624 AD when Allah granted Mohammed the right to Jihad. It expanded in 629, when Mohammed sent letters to the six rulers of the empires of the world that he knew inviting these emperors to Islam and implying that if they didn't accept the invitation, Allah and his forces on earth, the Moslems, would be forced to destroy them. Its message was emphasized in 632 AD when Mohammed, on his deathbed, ordered an attack on the Byzantine Empire. This attack, by the way, was just one of many. In ten years Mohammed commanded 65 military campaigns, campaigns of conquest that brought the entire Arab Peninsula to Islam. Mohammed fought in 27 of those campaigns himself, slicing and killing other humans. Which is why he is called a prophet of the sword. And why it is said in the Koran that paradise can only be achieved "in the shadow of swords". 
But the fact remains that this is a rapidly globalizing world, a world that cries out both for and against a central order. Mohammed invented the creed for such an order, and the world of Islam has refined it during 1,290 years of operation under a central caliphate. Now men like Osama want to revive that "new world order", that global caliphate. As many of Osama's supporters and predecessors have said, the time is ripe for such a thing. I hope that we Westerners offer a more appealing alternative. Howard In a message dated 7/30/2005 2:20:49 PM Eastern Standard Time, christian.rauh at uconn.edu writes: Some argue that the great achievements of the Arab world were really pre-islamic and that islam put a break on innovation which was center to middle eastern culture before it. Things still were good for a while but that was only some inertia from the old times as it took centuries to change things in those days. Islamic religion changed the characteristics of that culture (or culture groups) from decentralized and autonomous to centralized and controlled. The first seemed to be a better option for progress. What is striking to me is that the US seems to be going the same way - in a counter-"terrorist" movement with islamic roots, this country is absorbing the worst features of its "enemy". As a disclaimer, I know little about Islam, the above is from what I've heard. Anyone with better knowledge should point erros and elucidate. 
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych 
From shovland at mindspring.com Sun Jul 31 05:07:34 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 30 Jul 2005 22:07:34 -0700 Subject: [Paleopsych] Gold Trade and the Kingdom of Ancient Ghana Message-ID: <01C59553.17075D70.shovland@mindspring.com> http://www.metmuseum.org/toah/hd/gold/hd_gold.htm Around the fifth century, thanks to the availability of the camel, Berber-speaking people began crossing the Sahara Desert. From the eighth century onward, annual trade caravans followed routes later described by Arabic authors with minute attention to detail. Gold, sought from the western and central Sudan, was the main commodity of the trans-Saharan trade. The traffic in gold was spurred by the demand for and supply of coinage. The rise of the Soninke empire of Ghana appears to be related to the beginnings of the trans-Saharan gold trade in the fifth century. From the seventh to the eleventh century, trans-Saharan trade linked the Mediterranean economies, which demanded gold and could supply salt, to the sub-Saharan economies, where gold was abundant. Although the local supply of salt was sufficient in sub-Saharan Africa, the consumption of Saharan salt was promoted for trade purposes. In the eighth and ninth centuries, Arab merchants operating in southern Moroccan towns such as Sijilmasa bought gold from the Berbers and financed more caravans. These commercial transactions encouraged further conversion of the Berbers to Islam. Increased demand for gold in the North African Islamic states, which sought the raw metal for minting, prompted scholarly attention to Mali and Ghana, the latter referred to as the "Land of Gold." For instance, the geographer al-Bakri described the eleventh-century court at Kumbi Saleh, where he saw gold-embroidered caps, golden saddles, shields and swords mounted with gold, and dogs' collars adorned with gold and silver. 
The Soninke managed to keep the source of their gold (the Bambuk mines, most notably) secret from Muslim traders. Yet gold production and trade were important activities that undoubtedly mobilized hundreds of thousands of African people. Leaders of the ancient kingdom of Ghana accumulated wealth by keeping the core of pure metal, leaving the unworked native gold to be marketed by their people. Gold Trade and the Mali Empire By 1050 A.D., Ghana was strong enough to assume control of the Islamic Berber town of Audaghost. By the end of the twelfth century, however, Ghana had lost its domination of the western Sudan gold trade. Trans-Saharan routes began to bypass Audaghost, expanding instead toward the newly opened Bure goldfield. Soso, the southern chiefdom of the Soninke, gained control of Ghana as well as the Malinke, the latter eventually liberated by Sundiata Keita, who founded the Mali empire. Mali rulers did not encourage gold producers to convert to Islam, since prospecting and production of the metal traditionally depended on a number of beliefs and magical practices that were alien to Islam. In the fourteenth century, cowrie shells were introduced from the eastern coast as local currency, but gold and salt remained the principal mediums of long-distance trade. The flow of sub-Saharan gold to the northeast probably occurred in a steady but small stream. Mansa Musa's arrival in Cairo carrying a ton of the metal (1324-25) caused the market in gold to crash, suggesting that the average supply was not as great. Undoubtedly, some of this African gold was also used in Western gold coins. African gold was indeed so famous worldwide that a Spanish map of 1375 represents the king of Mali holding a gold nugget. When Mossi raids destroyed the Mali empire, the rising Songhai empire relied on the same resources. Gold remained the principal product in the trans-Saharan trade, followed by kola nuts and slaves. 
The Moroccan scholar Leo Africanus, who visited Songhai in 1510 and 1513, observed that the governor of Timbuktu owned many articles of gold, and that the coin of Timbuktu was made of gold without any stamp or superscription.